
BawialniaGPT: A Fine-Tuned Phi-3 Model for... Well, Something

Warning: This model is a joke and should not be used for anything practical. It was trained on a low-quality dataset and is intended purely for entertainment. It's basically an over-engineered Markov chain.
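For scale, here is what the un-over-engineered baseline, a plain word-level Markov chain, looks like (an illustrative sketch only; it has nothing to do with the actual training code):

```python
import random
from collections import defaultdict

def train(text: str) -> dict:
    # Map each word to the list of words observed to follow it.
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain: dict, start: str, length: int = 10, seed: int = 0) -> str:
    # Walk the chain, picking a random successor at each step.
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: no word ever followed this one
        out.append(random.choice(followers))
    return " ".join(out)
```

Eight hours on an RTX 4060 buys you roughly the same output quality, but with 3.82B more parameters.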

Model Description

Bawialnia-QLoRA is a QLoRA fine-tune of the Phi-3 model. It was trained on the Bawialnia Telegram Group Dataset, which is notorious for its low quality and lack of context. Despite the challenges, the model was trained for approximately 8 hours on an RTX 4060 GPU, because why not? And yes, it is horribly overtrained.

Model Performance

The model's performance is... questionable. It's not entirely clear what the model is good at, but it is definitely not good at generating coherent or meaningful text. In fact, it just generates random Polish-ish garbage. This is not a bug, it's a feature.

Model Statistics

  • Training time: ~8 hours (3 epochs)
  • GPU: RTX 4060
  • Dataset: Bawialnia Telegram Group Dataset (because why not?)
  • Architecture: Phi-3
  • Fine-tuning: QLoRA


Model Limitations

  • The model is not suitable for any practical applications.
  • The model may generate nonsensical or offensive responses.
  • The model may not respond at all, or respond with complete gibberish.

Usage

If you're feeling adventurous, you can use the model to generate text. Just don't say I didn't warn you.
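A minimal inference sketch with 🤗 Transformers might look like the following. The repo id is a placeholder (this card doesn't state where the checkpoint is published), and the chat format is the standard Phi-3 instruct template; adjust both to match the actual upload:

```python
# Minimal inference sketch. MODEL_ID is a placeholder, not a confirmed Hub path.
MODEL_ID = "BawialniaGPT"  # hypothetical; replace with the real repo id

def build_prompt(user_message: str) -> str:
    # Phi-3 instruct chat format, written out by hand for clarity.
    return f"<|user|>\n{user_message}<|end|>\n<|assistant|>\n"

def main() -> None:
    # Heavy dependencies imported lazily so the prompt helper works on its own.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    inputs = tokenizer(
        build_prompt("Powiedz coś mądrego."), return_tensors="pt"
    ).to(model.device)
    out = model.generate(
        **inputs, max_new_tokens=64, do_sample=True, temperature=0.9
    )
    # Decode only the newly generated tokens (the Polish-ish garbage).
    new_tokens = out[0][inputs["input_ids"].shape[1]:]
    print(tokenizer.decode(new_tokens, skip_special_tokens=True))

if __name__ == "__main__":
    main()
```

Whatever comes out, remember: not a bug, a feature.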

Disclaimer

The creators of this model are not responsible for any damage, confusion, or frustration caused by using this model. You have been warned.

  • Model size: 3.82B params (Safetensors)
  • Tensor type: BF16