Melvin56 committed
Commit d5f7f03 · verified · Parent: 10d56dc

Upload model via Google Colab

.gitattributes CHANGED
@@ -33,3 +33,13 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ deepseek-r1-redistill-qwen-7b-v1.1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ deepseek-r1-redistill-qwen-7b-v1.1-Q2_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ deepseek-r1-redistill-qwen-7b-v1.1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ deepseek-r1-redistill-qwen-7b-v1.1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ deepseek-r1-redistill-qwen-7b-v1.1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ deepseek-r1-redistill-qwen-7b-v1.1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ deepseek-r1-redistill-qwen-7b-v1.1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ deepseek-r1-redistill-qwen-7b-v1.1-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ deepseek-r1-redistill-qwen-7b-v1.1-fp16.gguf filter=lfs diff=lfs merge=lfs -text
+ imatrix.dat filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,129 @@
+ ---
+ license: mit
+ train: false
+ inference: true
+ pipeline_tag: text-generation
+ base_model:
+ - deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
+ ---
+ This is a version of the <a href="https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B">DeepSeek-R1-Distill-Qwen-7B</a> model, re-distilled for better performance.
+
+ ## Performance
+
+ | Models | <a href="https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B">DeepSeek-R1-Distill-Qwen-7B</a> | <a href="https://huggingface.co/mobiuslabsgmbh/DeepSeek-R1-ReDistill-Qwen-7B-v1.1">DeepSeek-R1-ReDistill-Qwen-7B-v1.1</a> |
+ |:-------------------:|:--------:|:----------------:|
+ | ARC (25-shot) | <b>55.03</b> | 52.3 |
+ | HellaSwag (10-shot) | 61.9 | <b>62.36</b> |
+ | MMLU (5-shot) | 56.75 | <b>59.53</b> |
+ | TruthfulQA-MC2 | 45.76 | <b>47.7</b> |
+ | Winogrande (5-shot) | 60.38 | <b>61.8</b> |
+ | GSM8K (5-shot) | 78.85 | <b>83.4</b> |
+ | Average | 59.78 | <b>61.18</b> |
+
+ | Models | <a href="https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B">DeepSeek-R1-Distill-Qwen-7B</a> | <a href="https://huggingface.co/mobiuslabsgmbh/DeepSeek-R1-ReDistill-Qwen-7B-v1.1">DeepSeek-R1-ReDistill-Qwen-7B-v1.1</a> |
+ |:-------------------:|:--------:|:----------------:|
+ | GPQA (0-shot) | 30.9 | <b>34.99</b> |
+ | MMLU-Pro (5-shot) | 28.83 | <b>31.02</b> |
+ | MuSR (0-shot) | 38.85 | <b>44.42</b> |
+ | BBH (3-shot) | 43.54 | <b>51.53</b> |
+ | IFEval (0-shot), strict | <b>42.33</b> | 35.49 |
+ | IFEval (0-shot), loose | 30.31 | <b>38.49</b> |
+
+ ## Usage
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ compute_dtype = torch.bfloat16
+ device = 'cuda'
+ model_id = "mobiuslabsgmbh/DeepSeek-R1-ReDistill-Qwen-7B-v1.1"
+
+ # Load the model and tokenizer
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=compute_dtype, attn_implementation="sdpa", device_map=device)
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+ # Build a chat-formatted prompt and generate
+ prompt = "What is 1.5+102.2?"
+ chat = tokenizer.apply_chat_template([{"role":"user", "content":prompt}], tokenize=True, add_generation_prompt=True, return_tensors="pt")
+ outputs = model.generate(chat.to(device), max_new_tokens=1024, do_sample=True)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ Output:
+ ```
+ <|begin▁of▁sentence|><|User|>What is 1.5+102.2?<|Assistant|><think>
+ First, I need to add the whole number parts of the two numbers. The whole numbers are 1 and 102, which add up to 103.
+
+ Next, I add the decimal parts of the two numbers. The decimal parts are 0.5 and 0.2, which add up to 0.7.
+
+ Finally, I combine the whole number and decimal parts to get the total sum. Adding 103 and 0.7 gives me 103.7.
+ </think>
+
+ To add the numbers \(1.5\) and \(102.2\), follow these steps:
+
+ 1. **Add the whole number parts:**
+ \[
+ 1 + 102 = 103
+ \]
+
+ 2. **Add the decimal parts:**
+ \[
+ 0.5 + 0.2 = 0.7
+ \]
+
+ 3. **Combine the results:**
+ \[
+ 103 + 0.7 = 103.7
+ \]
+
+ **Final Answer:**
+ \[
+ \boxed{103.7}
+ \]<|end▁of▁sentence|>
+ ```
+
+ ## HQQ
+ Run ~3.5x faster with <a href="https://github.com/mobiusml/hqq/">HQQ</a>. First, install the dependencies:
+ ```
+ pip install hqq
+ ```
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from hqq.models.hf.base import AutoHQQHFModel
+ from hqq.core.quantize import *
+
+ # Params
+ device = 'cuda:0'
+ backend = "torchao_int4"
+ compute_dtype = torch.bfloat16 if backend == "torchao_int4" else torch.float16
+ model_id = "mobiuslabsgmbh/DeepSeek-R1-ReDistill-Qwen-7B-v1.1"
+
+ # Load
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=compute_dtype, attn_implementation="sdpa")
+
+ # Quantize to 4-bit with HQQ
+ quant_config = BaseQuantizeConfig(nbits=4, group_size=64, axis=1)
+ AutoHQQHFModel.quantize_model(model, quant_config=quant_config, compute_dtype=compute_dtype, device=device)
+
+ # Optimize for the chosen inference backend
+ from hqq.utils.patching import prepare_for_inference
+ prepare_for_inference(model, backend=backend, verbose=False)
+
+ ############################################################
+ # Generate (streaming)
+ from hqq.utils.generation_hf import HFGenerator
+ gen = HFGenerator(model, tokenizer, max_new_tokens=4096, do_sample=True, compile='partial').warmup()
+
+ prompt = "If A equals B, and C equals B - A, what would be the value of C?"
+ out = gen.generate(prompt, print_tokens=True)
+
+ ############################################################
+ # Generate (simple)
+ # from hqq.utils.generation_hf import patch_model_for_compiled_runtime
+ # patch_model_for_compiled_runtime(model, tokenizer, warmup=True)
+
+ # prompt = "If A equals B, and C equals B - A, what would be the value of C?"
+ # chat = tokenizer.apply_chat_template([{"role":"user", "content":prompt}], tokenize=True, add_generation_prompt=True, return_tensors="pt")
+ # outputs = model.generate(chat.to(device), max_new_tokens=8192, do_sample=True)
+ # print(tokenizer.decode(outputs[0]))
+ ```
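+
+ ## GGUF
+ This commit also uploads GGUF quantizations of the model (Q2_K through Q8_0, an fp16 export, and an imatrix.dat importance-matrix file), listed below. As a minimal sketch of running one of them locally, assuming the llama-cpp-python bindings (`pip install llama-cpp-python`), which are not part of the original model card:
+ ```python
+ # Hedged example: the file name matches the upload in this repo, but the
+ # context-window and GPU-offload settings below are illustrative assumptions.
+ from llama_cpp import Llama
+
+ llm = Llama(
+     model_path="deepseek-r1-redistill-qwen-7b-v1.1-Q4_K_M.gguf",  # downloaded from this repo
+     n_ctx=4096,       # context window; pick what your RAM/VRAM allows
+     n_gpu_layers=-1,  # offload all layers to the GPU if one is available
+ )
+
+ out = llm.create_chat_completion(
+     messages=[{"role": "user", "content": "What is 1.5+102.2?"}],
+     max_tokens=1024,
+ )
+ print(out["choices"][0]["message"]["content"])
+ ```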
deepseek-r1-redistill-qwen-7b-v1.1-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7c4bff73ab778938195e7c856f4751da2e207bb453092388f28b947b696297f3
+ size 3015940544
deepseek-r1-redistill-qwen-7b-v1.1-Q2_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4e9d6aee55a86974069d8ea03ddfab2be1c5440e2ce3d728a7ea30c73dc01616
+ size 2834074048
deepseek-r1-redistill-qwen-7b-v1.1-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d40c99217ceb34d1a7f7d87472252a77ddb625a04f74d8a2bd7e85716338d08f
+ size 3808391616
deepseek-r1-redistill-qwen-7b-v1.1-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1ff8ff30ee356532be20a246e7d928adff4aaec89873f298b2751964bc319862
+ size 4444121536
deepseek-r1-redistill-qwen-7b-v1.1-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:477c6051b5bb4cbce544b6f0d402d6b120c6920befacc6e62a581e955cece9ad
+ size 4683073984
deepseek-r1-redistill-qwen-7b-v1.1-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3d2f5a22470c6b85fcdb43791a2136b6e5e5673305b72e5df22759860bc3ccde
+ size 5444831680
deepseek-r1-redistill-qwen-7b-v1.1-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:282f8b5bb29b17f98bc5b7ac3f9f4e2876542c7064f49edb919aa4017cf3fd2e
+ size 6254199232
deepseek-r1-redistill-qwen-7b-v1.1-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:674565aa9434bc220327614a690ac4b3f7c006c1039653a3a58c935d5de74c0f
+ size 8098525632
deepseek-r1-redistill-qwen-7b-v1.1-fp16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4e01b6c3a067974bdf6fbb5668d3ba69a0cf8f27bfd4a414a8f6e73000a206c3
+ size 15237853344
imatrix.dat ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c618fc38a4065f2ad30214f63fad6f63546158a0a01af6f7d3c4f495a85a9fb6
+ size 4536697