We’ve solved the trade-off by quantizing the DeepSeek R1 Distilled model to one-fourth of its original file size, without losing accuracy. On an **HP Omnibook AIPC** with an **AMD Ryzen™ AI 9 HX 370 processor**, the NexaQuant version decoded at **17.20 tokens per second** with a peak RAM usage of just **5017 MB**, versus only **5.30 tokens per second** and **15564 MB** of RAM for the unquantized version, while maintaining full-precision model accuracy.
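
To see why one-fourth is the expected ratio, note that q4_0 stores roughly 4 bits per weight versus 16 bits for the original. A rough sketch of the arithmetic (the 8B parameter count comes from the model name; real GGUF files are slightly larger because of quantization scales and metadata):

```python
# Back-of-the-envelope file sizes for an 8B-parameter model.
# Real GGUF files are slightly larger: quantization blocks carry
# scale factors and some tensors stay at higher precision.
PARAMS = 8e9  # from "Llama-8B" in the model name

def approx_size_gb(bits_per_weight: float) -> float:
    """Approximate weight-storage size in GB at a given bit width."""
    return PARAMS * bits_per_weight / 8 / 1e9

print(f"BF16 (16-bit): ~{approx_size_gb(16):.0f} GB")  # ~16 GB
print(f"q4_0  (4-bit): ~{approx_size_gb(4):.0f} GB")   # ~4 GB, one-fourth
```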
## NexaQuant Use Case Demo
Here’s a comparison of how a standard Q4_K_M quantization and NexaQuant-4Bit handle a common investment banking brain teaser. NexaQuant stays accurate while shrinking the model file to one-fourth of its original size.

## Benchmarks
The benchmarks show that NexaQuant’s 4-bit model preserves the reasoning capacity of the original 16-bit model, delivering uncompromised performance with a significantly smaller memory and storage footprint.

**Reasoning Capacity:**

<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6618e0424dbef6bd3c72f89a/pJzYVGTdWWvLn2MJtsD_d.png" width="80%" alt="Example" />
</div>

**General Capacity:**

| Benchmark | Original (16-bit) | Q4_K_M | NexaQuant-4Bit |
|:---|:---:|:---:|:---:|
| **IFEval - Prompt - Loose** | 30.31 | 25.74 | 28.47 |
| **IFEval - Prompt - Strict** | 27.91 | 25.74 | 25.51 |
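
For readers unfamiliar with the metric names: IFEval scores instruction following at the prompt level, where *strict* requires every instruction in a prompt to be satisfied exactly, while *loose* re-checks after normalizing the response (for example, stripping markdown). A minimal sketch of the prompt-level aggregation, using made-up verdicts in place of IFEval’s real per-instruction checkers:

```python
# Prompt-level IFEval aggregation: a prompt only counts as correct
# if ALL of its instructions were followed. The boolean verdicts
# below are illustrative placeholders, not real checker output.
prompts = [
    {"strict": [True, True],  "loose": [True, True]},   # both pass
    {"strict": [True, False], "loose": [True, True]},   # loose-only pass
    {"strict": [False, True], "loose": [False, True]},  # both fail
]

def prompt_level_accuracy(mode: str) -> float:
    """Fraction of prompts whose instructions were all followed."""
    return sum(all(p[mode]) for p in prompts) / len(prompts)

print(f"strict: {prompt_level_accuracy('strict'):.1%}")  # 33.3%
print(f"loose:  {prompt_level_accuracy('loose'):.1%}")   # 66.7%
```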
## Run locally
NexaQuant is compatible with **Nexa-SDK**, **Ollama**, **LM Studio**, **Llama.cpp**, and any llama.cpp-based project. Below, we outline several ways to run the model locally.

#### Option 1: Using Nexa-SDK

Execute the following command in your terminal:

```
nexa run DeepSeek-R1-Distill-Llama-8B-NexaQuant:q4_0
```
#### Option 2: Using llama.cpp
**Step 1: Build llama.cpp on Your Device**

Follow the official llama.cpp build instructions for your platform.

**Step 2: Run the Model**

Once built, run `llama-cli` under `<build_dir>/bin/`:

```
./llama-cli \
    --model <path_to_model>.gguf \
    --prompt 'Provide step-by-step reasoning enclosed in <think> </think> tags, followed by the final answer enclosed in \boxed{} tags.'
```
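
The prompt above asks the model to wrap its chain of thought in `<think> </think>` tags and its final answer in `\boxed{}`. If you are scripting around the CLI, a small helper like this hypothetical `split_reasoning` (not part of llama.cpp) can pull the two parts out of the raw output:

```python
import re

def split_reasoning(output: str) -> tuple[str, str]:
    """Split model output into (<think> reasoning, \\boxed{} answer).

    Either part comes back empty if the model ignored the format.
    """
    think = re.search(r"<think>(.*?)</think>", output, flags=re.DOTALL)
    boxed = re.search(r"\\boxed\{([^}]*)\}", output)
    return (
        think.group(1).strip() if think else "",
        boxed.group(1).strip() if boxed else "",
    )

reasoning, answer = split_reasoning(
    "<think>Two coin flips give HH, HT, TH, TT with equal probability, "
    "so P(both heads) = 1/4.</think> The answer is \\boxed{1/4}."
)
print(answer)  # 1/4
```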
#### Option 3: Using LM Studio
**Step 1: Download and Install LM Studio**

Get the latest version from the [official website](https://lmstudio.ai/).

3. Once loaded, go to the chat window and start a conversation.
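
Beyond the chat window, LM Studio can serve the loaded model through its local OpenAI-compatible server. Assuming the server is enabled and listening on LM Studio’s default `http://localhost:1234`, a minimal sketch of calling it from Python:

```python
import requests

# Assumes LM Studio's local server is running with the NexaQuant
# model loaded; 1234 is LM Studio's default port.
resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "messages": [{
            "role": "user",
            "content": (
                "Provide step-by-step reasoning enclosed in <think> "
                "</think> tags, followed by the final answer enclosed "
                "in \\boxed{} tags. What is 15% of 240?"
            ),
        }],
    },
    timeout=300,
)
print(resp.json()["choices"][0]["message"]["content"])
```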
---
## What's next
1. This model is built for complex problem-solving, which is why it sometimes goes through a long thinking process even for simple questions. We recognize this and are working on improving it in the next update.
2. NPU inference for the NexaQuant DeepSeek-R1 distilled model is coming next.

If you liked our work, feel free to ⭐ star Nexa’s GitHub repo.

Interested in running DeepSeek R1 on your own devices with optimized CPU, GPU, and NPU acceleration or compressing your finetuned DeepSeek-Distill-R1? [Let’s chat!](https://nexa.ai/book-a-call)

[Blogs](https://nexa.ai/blogs/deepseek-r1-nexaquant) | [Discord](https://discord.gg/nexa-ai) | [X(Twitter)](https://x.com/nexa_ai)

Join our Discord server for help and discussion.