---
language:
  - en
license: other
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - mistral
  - trl
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
---

# Uploaded model

Using real-world user data from a previous farmer-assistant chatbot service and additional curated datasets (prioritizing sustainable, regenerative, organic farming practices), Gemma 2B and Mistral 7B LLMs were iteratively fine-tuned and tested against each other as well as against basic benchmarks; the Gemma 2B fine-tune performed best overall. LoRA adapters were saved for each model.
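The saved LoRA adapter can be loaded directly through Unsloth for inference. Below is a minimal sketch; the repository id, sequence length, and example prompt are placeholders rather than values taken from this card:

```python
from unsloth import FastLanguageModel

# Placeholder repo id -- substitute this model's actual Hugging Face path
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Solshine/<this-model-repo>",  # LoRA adapter over the 4-bit Mistral base
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

# Example question; any farming-related prompt works the same way
messages = [{"role": "user", "content": "How can I build soil fertility without synthetic inputs?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```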

This V3 scored better in agriculture-focused preliminary testing than V1 or V2 of the Mistral fine-tune series on the selected dataset.

This Mistral model was trained with Unsloth and Hugging Face's TRL library.
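For reference, the usual Unsloth + TRL recipe for LoRA fine-tuning on top of the 4-bit Mistral base looks roughly like the sketch below; the hyperparameters and dataset file name are illustrative assumptions, not the exact settings used for this model:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit quantized Mistral 7B Instruct v0.2 base listed in the metadata
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
)

# Hypothetical file name; the curated farming dataset itself is not published here
dataset = load_dataset("json", data_files="farmer_assistant_chats.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # column holding the formatted chat text
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()

# Save only the LoRA adapter weights, as described above
model.save_pretrained("lora_adapters")
```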