
Uploaded model

  • Developed by: Deeokay
  • License: apache-2.0
  • Finetuned from model: unsloth/phi-3-medium-4k-instruct-bnb-4bit

This Phi-3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.

README

This is a test model trained on the following:

  • a private dataset focused on students in the MYP (IB) Program, made for my niece
  • works with Ollama: a Modelfile containing just "FROM path/to/model" is enough (the standard template works with no issues)

HOW TO USE

The whole point of conversion for me was that I wanted to be able to use the model through Ollama (or other local options). For Ollama, the model needs to be a GGUF file. Once you have that, it is pretty straightforward (the GGUF reports the llama architecture, so Ollama handles it out of the box).

Quick Start:

  • You must already have Ollama installed and running on your machine
  • Download the unsloth.Q4_K_M.gguf model from the Files tab (a scripted version of these steps is sketched after this list)
  • In the same directory, create a file called "Modelfile"
  • Inside the "Modelfile", type
FROM ./unsloth.Q4_K_M.gguf # or whichever GGUF file you downloaded
  • Save and go back to the folder (the folder where the model + Modelfile exist)
  • Now, in a terminal, make sure you are in that folder and type the following command
ollama create mycustomai  # "mycustomai" <- you can name it anything you want
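
For convenience, here is the same flow as a single terminal session (a sketch: the repo id is a placeholder for this model's actual Hugging Face id, and it assumes the huggingface_hub CLI is installed):

huggingface-cli download <user>/<repo> unsloth.Q4_K_M.gguf --local-dir .  # fetch the quantized GGUF
echo 'FROM ./unsloth.Q4_K_M.gguf' > Modelfile  # minimal Modelfile
ollama create mycustomai -f Modelfile  # register the model with Ollama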

This GGUF is based on unsloth/Phi-3-medium-4k-instruct, so Ollama doesn't need anything else to auto-configure this model.

After that, you should be able to chat with this model!
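
For a quick smoke test from the terminal (assuming you kept the name mycustomai from above):

ollama run mycustomai "Explain the MYP personal project in two sentences."

Running ollama run mycustomai with no prompt drops you into an interactive chat session instead.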

NOTE: DISCLAIMER

Please note this is not intended for production use; it is the result of self-taught fine-tuning.

The special tokens were kept the same, and the training data uses the following template:

<s><|user|>{question}<|end|>
<|assistant|>{answer}<|end|></s> 
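Ollama's auto-detected template should match this, but if you ever need to pin the prompt format yourself, a minimal Modelfile sketch mirroring the template above (using Ollama's Go-template Modelfile syntax; the <s>/</s> BOS/EOS tokens are added by the runtime, so they are omitted here) would be:

FROM ./unsloth.Q4_K_M.gguf
TEMPLATE """<|user|>
{{ .Prompt }}<|end|>
<|assistant|>
"""
PARAMETER stop <|end|>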
GGUF
Model size: 14B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 6-bit, 16-bit
