Solshine committed
Commit be6e590
1 Parent(s): d98ce62

Update README.md

Files changed (1): README.md (+7 -2)
README.md CHANGED
@@ -13,12 +13,17 @@ base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
 
 # Uploaded model
 
-- **Developed by:** Solshine
+- **Developed by:** Caleb DeLeeuw; Copyleft Cultivars, a nonprofit
 - **License:** Hippocratic 3.0 CL-Eco-Extr
 [![Hippocratic License HL3-CL-ECO-EXTR](https://img.shields.io/static/v1?label=Hippocratic%20License&message=HL3-CL-ECO-EXTR&labelColor=5e2751&color=bc8c3d)](https://firstdonoharm.dev/version/3/0/cl-eco-extr.html)
 https://firstdonoharm.dev/version/3/0/cl-eco-extr.html
 - **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
 
-This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
+
+Using real-world user data from a previous farmer assistant chatbot service and additional curated datasets (prioritizing sustainable regenerative organic farming practices), Gemma 2B and Mistral 7B LLMs were iteratively fine-tuned and tested against each other as well as on basic benchmarks, whereby the Gemma 2B fine-tune emerged victorious. LoRA adapters were saved for each model.
+
+V3 here scored better in agriculture-focused preliminary testing than V1 or V2 of the Mistral series of fine-tunes on the selected dataset.
+
+This Mistral model was trained with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
 
 [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)