Quantization made by Richard Erkhov.
deepseek-math-7b-rl - GGUF
- Model creator: https://huggingface.co/deepseek-ai/
- Original model: https://huggingface.co/deepseek-ai/deepseek-math-7b-rl/
| Name | Quant method | Size |
| --- | --- | --- |
| deepseek-math-7b-rl.Q2_K.gguf | Q2_K | 2.53GB |
| deepseek-math-7b-rl.Q3_K_S.gguf | Q3_K_S | 2.92GB |
| deepseek-math-7b-rl.Q3_K.gguf | Q3_K | 3.22GB |
| deepseek-math-7b-rl.Q3_K_M.gguf | Q3_K_M | 3.22GB |
| deepseek-math-7b-rl.Q3_K_L.gguf | Q3_K_L | 3.49GB |
| deepseek-math-7b-rl.IQ4_XS.gguf | IQ4_XS | 3.56GB |
| deepseek-math-7b-rl.Q4_0.gguf | Q4_0 | 3.73GB |
| deepseek-math-7b-rl.IQ4_NL.gguf | IQ4_NL | 3.74GB |
| deepseek-math-7b-rl.Q4_K_S.gguf | Q4_K_S | 3.75GB |
| deepseek-math-7b-rl.Q4_K.gguf | Q4_K | 3.93GB |
| deepseek-math-7b-rl.Q4_K_M.gguf | Q4_K_M | 3.93GB |
| deepseek-math-7b-rl.Q4_1.gguf | Q4_1 | 4.1GB |
| deepseek-math-7b-rl.Q5_0.gguf | Q5_0 | 4.48GB |
| deepseek-math-7b-rl.Q5_K_S.gguf | Q5_K_S | 4.48GB |
| deepseek-math-7b-rl.Q5_K.gguf | Q5_K | 4.59GB |
| deepseek-math-7b-rl.Q5_K_M.gguf | Q5_K_M | 4.59GB |
| deepseek-math-7b-rl.Q5_1.gguf | Q5_1 | 4.86GB |
| deepseek-math-7b-rl.Q6_K.gguf | Q6_K | 5.28GB |
| deepseek-math-7b-rl.Q8_0.gguf | Q8_0 | 6.84GB |
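Any of these files can be run with a GGUF-compatible runtime such as llama.cpp. Below is a minimal sketch using the `llama-cpp-python` bindings; the choice of quant and the local path are assumptions (download the file from this repo first), and the prompt follows the chain-of-thought template recommended further down this page.

```python
# Minimal sketch: run a quant from the table above with llama-cpp-python.
# Assumes deepseek-math-7b-rl.Q4_K_M.gguf has been downloaded into the
# working directory; any other quant from the table works the same way.
from llama_cpp import Llama

llm = Llama(
    model_path="deepseek-math-7b-rl.Q4_K_M.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; 0 for CPU-only
)

prompt = (
    "User: what is the integral of x^2 from 0 to 2?\n"
    "Please reason step by step, and put your final answer within \\boxed{}.\n\n"
    "Assistant:"
)
out = llm(prompt, max_tokens=256, stop=["User:"])
print(out["choices"][0]["text"])
```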
Original model description:
- license: other
- license_name: deepseek
- license_link: https://github.com/deepseek-ai/DeepSeek-Math/blob/main/LICENSE-MODEL
[🏠Homepage] | [🤖 Chat with DeepSeek LLM] | [Discord] | [Wechat(微信)]
1. Introduction to DeepSeekMath
See the Introduction in the original model card for more details.
2. How to Use
Here are some examples of how to use our model.
Chat Completion
❗❗❗ Please use a chain-of-thought prompt to test DeepSeekMath-Instruct and DeepSeekMath-RL:
- English questions: `{question}\nPlease reason step by step, and put your final answer within \boxed{}.`
- Chinese questions: `{question}\n请通过逐步推理来解答问题，并把最终答案放置于\boxed{}中。` (the same instruction in Chinese: "Please reason step by step, and put your final answer within \boxed{}.")
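For illustration, a minimal sketch of filling the English template with a question (`question` is a placeholder):

```python
# Sketch: fill the recommended chain-of-thought template with a question.
question = "what is the integral of x^2 from 0 to 2?"
prompt = f"{question}\nPlease reason step by step, and put your final answer within \\boxed{{}}."
print(prompt)
```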
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

model_name = "deepseek-ai/deepseek-math-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
model.generation_config = GenerationConfig.from_pretrained(model_name)
model.generation_config.pad_token_id = model.generation_config.eos_token_id

messages = [
    {"role": "user", "content": "what is the integral of x^2 from 0 to 2?\nPlease reason step by step, and put your final answer within \\boxed{}."}
]
# Build the prompt with the model's chat template and generate.
input_tensor = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_tensor.to(model.device), max_new_tokens=100)

# Decode only the newly generated tokens.
result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=True)
print(result)
```
If you prefer not to use the provided `apply_chat_template` function, you can also interact with our model following the sample template below. Note that `messages` should be replaced by your input.
```
User: {messages[0]['content']}

Assistant: {messages[1]['content']}<|end▁of▁sentence|>User: {messages[2]['content']}

Assistant:
```
Note: By default (`add_special_tokens=True`), our tokenizer automatically adds a `bos_token` (`<|begin▁of▁sentence|>`) before the input text. Additionally, since the system prompt is not compatible with this version of our models, we DO NOT RECOMMEND including a system prompt in your input.
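Putting the template and the note together, here is a minimal sketch of manual prompting, reusing the `tokenizer` and `model` loaded in the earlier snippet. The exact turn separators are taken from the template above; the bos token is added by the tokenizer, so it is not written into the string.

```python
# Sketch: build the conversation string manually instead of using
# apply_chat_template. The tokenizer prepends <|begin▁of▁sentence|>
# itself (add_special_tokens=True), so it is omitted here.
question = ("what is the integral of x^2 from 0 to 2?\n"
            "Please reason step by step, and put your final answer within \\boxed{}.")
prompt = f"User: {question}\n\nAssistant:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```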
3. License
This code repository is licensed under the MIT License. The use of DeepSeekMath models is subject to the Model License. DeepSeekMath supports commercial use.
See the LICENSE-MODEL for more details.
4. Contact
If you have any questions, please raise an issue or contact us at service@deepseek.com.