DeepSeek-R1-DRAFT-Qwen2.5-Coder-0.5B

This model was trained on CODE outputs of deepseek-ai/DeepSeek-R1-Distill-Qwen-32B and is meant to be used only as a draft model for speculative decoding.

It is specifically intended for users of 24 GB cards (RTX 3090/4090), allowing you to run the Q4_K_M GGUF version of DeepSeek-R1-Distill-Qwen-32B with 16k context and speed up generation without giving up context length or model quality.
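
Here is a minimal sketch of pairing the two models, using transformers' assisted generation (its implementation of speculative decoding); the prompt and generation settings are illustrative assumptions, and the GGUF workflow described above would go through llama.cpp instead:

```python
# Minimal sketch of speculative decoding with this model as the draft,
# via transformers' assisted generation. Prompt and settings are
# illustrative assumptions, not from the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

target_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
draft_id = "alamios/DeepSeek-R1-DRAFT-Qwen2.5-Coder-0.5B"

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(
    target_id, device_map="auto", torch_dtype="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    draft_id, device_map="auto", torch_dtype="auto"
)

prompt = "Write a Python function that checks whether a number is prime."
inputs = tokenizer(prompt, return_tensors="pt").to(target.device)

# The draft proposes several tokens at a time; the target verifies them
# in a single forward pass, so accepted drafts cut per-token target passes.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In llama.cpp, the analogous setup loads this model's GGUF quant as the draft via its speculative-decoding option (`-md`/`--model-draft` in recent builds).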

Data info

The data consists of code tasks collected from various datasets. The model was trained for 4 epochs on 1,400 unique examples, for a total of 4,600,000 tokens per epoch.

Since data generation was done using spare GPU time, I may publish a further-trained version later.

Model details

Base model: Qwen/Qwen2.5-0.5B
Model size: 494M params
Tensor type: BF16