unsloth/DeepSeek-R1-Distill-Qwen-14B (Unsloth AI)
Tags: Text Generation · Transformers · Safetensors · English · qwen2 · deepseek · qwen · unsloth · conversational · text-generation-inference · Inference Endpoints
License: apache-2.0
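For reference, a minimal sketch of loading this checkpoint with the Hugging Face transformers library, matching the Transformers and conversational tags above. The repo ID is taken from the page header; the dtype, device placement, and generation parameters are illustrative assumptions, not settings documented on this page.

```python
# Minimal sketch: loading unsloth/DeepSeek-R1-Distill-Qwen-14B with transformers.
# The repo ID comes from the page header; precision and device settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/DeepSeek-R1-Distill-Qwen-14B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: half precision to fit a 14B model
    device_map="auto",           # assumption: spread layers across available devices
)

# The model is tagged "conversational", so the tokenizer's chat template applies.
messages = [{"role": "user", "content": "Why is the sky blue?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```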
Community discussions (1 open, 0 closed):
#1, opened 9 days ago by kyars: What is the difference between this model and the regular distilled 14 billion parameter model?