---
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---

# Uploaded model

- **Developed by:** Deeokay
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-3-medium-4k-instruct-bnb-4bit

This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

# README

This is a test model with the following characteristics:

- fine-tuned on a private dataset focused on students in the MYP (IB) program, made for my niece
- works with `ollama create` using just `FROM path/to/model` as the Modelfile (the standard template works with no issues)

# HOW TO USE

The whole point of the conversion, for me, was to be able to use the model through Ollama (or other local options). Ollama requires the model to be a GGUF file. Once you have that, the rest is pretty straightforward, since Ollama auto-configures the chat template for this model (see the note below).

Quick Start:

- You must already have Ollama installed and running on your system
- Download the `unsloth.Q4_K_M.gguf` model from Files
- In the same directory, create a file called "Modelfile"
- Inside the "Modelfile", type:

```
# or whichever GGUF file you downloaded
FROM ./unsloth.Q4_K_M.gguf
```

- Save the file and go back to the folder (the folder where the model + Modelfile exist)
- Now, in a terminal, make sure you are in that same folder and type the following command:

```
ollama create mycustomai  # "mycustomai" <- you can name it anything u want
```

Since this GGUF is based on unsloth/Phi-3-medium-4k-instruct, Ollama doesn't need anything else in the Modelfile to auto-configure the model.

After that, you should be able to use this model to chat!

# NOTE: DISCLAIMER

Please note this is not intended for production use; it is the result of self-taught fine-tuning.

The special tokens were kept the same, and the training data follows this template:

```
<|user|>{question}<|end|>
<|assistant|>{answer}<|end|>
```
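As an aside on the download step: if you prefer the command line to clicking through the Files tab, the GGUF can also be fetched with `huggingface-cli`. This is just a sketch; the repo id below is a placeholder, so substitute this model's actual Hub path:

```
# download only the quantized GGUF into the current directory
huggingface-cli download <user>/<this-repo> unsloth.Q4_K_M.gguf --local-dir .
```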
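To sanity-check the finished model, `ollama run` works both one-shot and interactively. A minimal smoke test, assuming you named the model `mycustomai` in the create step above:

```
ollama run mycustomai "Give a two-sentence summary of the MYP personal project."
```

Running `ollama run mycustomai` with no prompt starts an interactive chat session instead.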
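Ollama also serves a local REST API (default `http://localhost:11434`), which is how other local front-ends typically talk to it. A minimal sketch of the chat endpoint, again using the `mycustomai` name from the create step, with streaming disabled so the reply comes back as a single JSON object:

```
curl http://localhost:11434/api/chat -d '{
  "model": "mycustomai",
  "messages": [{ "role": "user", "content": "What is a criterion rubric in the MYP?" }],
  "stream": false
}'
```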