Intellect V0.1 (1.8B) is a small model that is still under development and has not been extensively tested. We do not recommend deploying it for production use, but it performs well for private applications. Feedback is welcome.
Introduction
We introduce Intellect 1.8B (V0.1), our first-ever reasoning model. It is a full-parameter fine-tune of DeepSeek-R1-Distill-Qwen-1.5B (licensed under MIT), trained on the OpenO1-SFT dataset (licensed under Apache 2.0) together with a small set of supplementary text files covering languages spoken in Pakistan.
Intellect V0.1 (1.8B) is licensed under Apache 2.0, meaning you are free to use it in personal projects. However, this fine-tune is highly experimental, and we do not recommend it for serious, production-ready deployments.
You can find the FP32 version here.
Usage
Since the training data consisted only of single-turn pairs (one message in, one message out), the model often repeats itself after the user sends a follow-up question.
Also, sometimes the thinking process does not start at all, causing the model to generate complete nonsense, especially when the weights are compressed even to just FP16. To mitigate this, please use this slightly modified chat template:
### Instruction:
{{{ INPUT }}}
### Response:
<Thought>
{{{ OUTPUT }}}
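For reference, here is a minimal sketch of how the template above could be applied with llama-cpp-python; the GGUF filename, context size, and sampling settings are assumptions and not part of the official release.

from llama_cpp import Llama

# Placeholder filename; use whichever quantization you actually downloaded.
llm = Llama(model_path="Intellect_V0.1-1.8B.FP16.gguf", n_ctx=4096)

def build_prompt(user_message: str) -> str:
    # Mirrors the template above; "<Thought>" is pre-inserted so the model
    # begins its reasoning trace instead of skipping it.
    return (
        "### Instruction:\n"
        f"{user_message}\n"
        "### Response:\n"
        "<Thought>\n"
    )

output = llm(
    build_prompt("Name the provinces of Pakistan."),
    max_tokens=512,
    stop=["### Instruction:"],  # stop before the model invents a new turn
)
print(output["choices"][0]["text"])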
Training Details
We used SGD (instead of AdamW) with an initial learning rate of 5.0e-5, which allowed us to train the model with a batch size of 1 and a maximum context length of 4K while staying within the 64 GB of memory allocated for this project.
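As an illustration only, a comparable setup in plain PyTorch/Transformers might look like the sketch below; this is not our actual training script, and the example step and data handling are assumptions.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float32)
model.train()

# SGD keeps optimizer state much smaller than AdamW, which is what lets a
# full-parameter fine-tune fit in the 64 GB budget mentioned above.
optimizer = torch.optim.SGD(model.parameters(), lr=5.0e-5)

def training_step(text: str) -> float:
    # One single-turn example per step (batch size 1), truncated to 4K tokens.
    batch = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096)
    outputs = model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()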
Training completed in under a day, during which PhantasiaAI was unavailable (from 03/02/2025 to 04/02/2025). The service is now fully operational.
Visit our website.
Check out our Character.AI alternative.
Support us financially.