---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: LeroyDyer/Mixtral_AI_MiniTron_SFT
datasets:
- abacusai/ARC_DPO_FewShot
- abacusai/MetaMathFewshot
- abacusai/HellaSwag_DPO_FewShot
- yahma/alpaca-cleaned
- iamtarun/python_code_instructions_18k_alpaca
- gate369/Alpaca-Star
---

This model is still in training! It requires more data, and time! Day by day, roughly one hour of Dolphin Coder / White Rabbit training (and Orca/Dolphin generally) will be added until those datasets are fully overfit, before other datasets are applied.

Getting better! While in training, the base model becomes the updated version: this model is always training and changing, and if a problem occurs it can be reverted to its previous base self.

# Uploaded model

- **Developed by:** LeroyDyer
- **License:** apache-2.0
- **Finetuned from model:** Mixtral_AI_MiniTron

Recently updated with a few more datasets (still not ready to talk about; converging quickly on the datasets).

This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.