# BawialniaGPT: A Fine-Tuned Phi-3 Model for... Well, Something
**Warning:** This model is a joke and should not be used for any practical purpose. It was trained on a low-quality dataset and is intended purely for entertainment. It's basically an over-engineered Markov chain.
## Model Description
BawialniaGPT is a Phi-3 model fine-tuned with QLoRA. It was trained on the Bawialnia Telegram Group Dataset, which is low quality and lacks context. Despite the challenges, the model was trained for approximately 8 hours on an RTX 4060 GPU, because why not? And yes, it is horribly overfit.
## Model Performance
The model's performance is... questionable. It's not entirely clear what the model is good at, but it's definitely not good at generating coherent or meaningful text. In fact, it just generates random Polish-ish garbage. This is not a bug, it's a feature.
## Model Statistics
- Training time: ~8 hours (3 epochs)
- GPU: RTX 4060
- Dataset: Bawialnia Telegram Group Dataset (because why not?)
- Architecture: Phi-3
- Fine-tuning: QLoRA
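For reference, a QLoRA setup matching the stats above can be sketched with the `transformers` and `peft` libraries. This is a hypothetical reconstruction — the actual training script isn't published, the base checkpoint name and every hyperparameter here are assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",  # assumed base checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)

# Low-rank adapters on the attention projections; rank/alpha/dropout
# are illustrative guesses, not the values actually used
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["qkv_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the adapters train; the 4-bit base stays frozen
```

Training then proceeds on the (low-quality) chat dataset for the stated 3 epochs, which on an RTX 4060 plausibly adds up to the ~8 hours above.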
## Model Limitations
- The model is not suitable for any practical applications.
- The model may generate nonsensical or offensive responses.
- The model may not respond at all, or respond with complete gibberish.
## Usage
If you're feeling adventurous, you can use the model to generate text. Just don't say I didn't warn you.
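If you do try it, inference looks roughly like a standard `transformers` text-generation pipeline. The repo id below is a placeholder — substitute the actual model id from this page, and note the sampling settings are arbitrary:

```python
from transformers import pipeline

# Placeholder repo id; replace with the real one for this model
generator = pipeline("text-generation", model="your-user/BawialniaGPT")

out = generator(
    "Siema, co tam?",   # "Hey, what's up?" in Polish
    max_new_tokens=60,
    do_sample=True,
    temperature=0.9,
)
print(out[0]["generated_text"])  # expect Polish-ish garbage, as advertised
```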
## Disclaimer
The creators of this model are not responsible for any damage, confusion, or frustration caused by using this model. You have been warned.