---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: LeroyDyer/Mixtral_AI_MiniTron_SFT
datasets:
- abacusai/ARC_DPO_FewShot
- abacusai/MetaMathFewshot
- abacusai/HellaSwag_DPO_FewShot
- yahma/alpaca-cleaned
- iamtarun/python_code_instructions_18k_alpaca
- gate369/Alpaca-Star
---
This model is still under training and needs more data and time. Day by day, roughly one hour of Dolphin Coder / White Rabbit data (and, more generally, Orca/Dolphin data) will be added until those datasets are fully overfit, before other datasets are applied.

It is getting better but remains under training: each updated version becomes the new base model, so this model is always training and changing, and if a problem occurs it can revert to its previous base checkpoint.
# Uploaded model
- **Developed by:** LeroyDyer
- **License:** apache-2.0
- **Finetuned from model:** Mixtral_AI_MiniTron
Recently updated with a few more datasets (not yet ready to talk about); the model is converging quickly on them.
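
For reference, a minimal inference sketch using the standard `transformers` API is shown below. The repo id and the Alpaca-style prompt template are assumptions (based on the instruction datasets listed above) and should be adjusted to this repository:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- substitute the id of this uploaded checkpoint.
model_id = "LeroyDyer/Mixtral_AI_MiniTron_SFT"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Alpaca-style prompt, assumed from the instruction datasets in the card.
prompt = (
    "### Instruction:\nWrite a Python function that reverses a string.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```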
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
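
As a rough illustration of the incremental training workflow described above, the sketch below runs a short LoRA pass with Unsloth and TRL's `SFTTrainer`. The repo id, dataset choice, and hyperparameters are illustrative only, not the exact recipe used for this model, and argument names may differ across TRL versions:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the current checkpoint in 4-bit (repo id is a placeholder).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="LeroyDyer/Mixtral_AI_MiniTron_SFT",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is updated.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# One of the instruction datasets from the card, mapped to a single text field.
def to_text(example):
    return {
        "text": f"### Instruction:\n{example['instruction']}\n\n"
                f"### Input:\n{example['input']}\n\n"
                f"### Response:\n{example['output']}"
    }

dataset = load_dataset("yahma/alpaca-cleaned", split="train").map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,           # a short incremental pass, as described above
        learning_rate=2e-4,
        logging_steps=10,
        output_dir="outputs",
    ),
)
trainer.train()
```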