gbstox committed on
Commit
6d7cb9c
1 Parent(s): 5c5d3db

Create README.md

Files changed (1)
  README.md +38 -0
README.md ADDED
@@ -0,0 +1,38 @@
+ ---
+ base_model: NousResearch/Nous-Hermes-2-Yi-34B
+ datasets:
+ - gbstox/agronomy-resources
+ tags:
+ - Yi-34B
+ - instruct
+ - finetune
+ - agriculture
+ language:
+ - en
+ ---
+
+ # AgronomYi-hermes-34B
+
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/63042a3d7373aacccd896484/TwXNxFw8zSLuWjiYL41Bj.jpeg" width="500">
+
+ # About
+ AgronomYi is a fine-tune of [Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B), which itself uses Yi-34B as its base model.
+ I fine-tuned it on agronomy data (exclusively textbooks and university extension guides); the full training dataset is [here](https://huggingface.co/datasets/gbstox/agronomy-resources).
+ On the benchmark below, AgronomYi outperforms every model except gpt-4, and it consistently beats the Yi-34B base model by 7-9% and the Hermes fine-tune by 3-5%. I take this to mean that even better results can be achieved with additional fine-tuning, and that larger models tend to perform better in general.
+
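+ # Example usage
+ The model loads like any other Hugging Face causal LM. The sketch below is a minimal, untested example: the repo id and the ChatML-style chat template are assumptions carried over from the Nous-Hermes-2-Yi-34B base, so adjust them to whatever this repo actually uses.
+
+ ```python
+ # Minimal inference sketch. Assumptions: the repo id below is correct and the
+ # tokenizer ships a ChatML-style chat template inherited from Nous-Hermes-2-Yi-34B.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "gbstox/AgronomYi-hermes-34B"  # assumed repo id
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,  # 34B parameters: bf16 + device_map to spread across available GPUs
+     device_map="auto",
+ )
+
+ messages = [
+     {"role": "system", "content": "You are an expert agronomist."},
+     {"role": "user", "content": "When should I apply a pre-emergent herbicide in corn?"},
+ ]
+ input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
+ output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
+ print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
+ ```
+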
+ # Benchmark comparison
+ [Benchmark info here](https://github.com/gbstox/agronomy_llm_benchmarking)
+
+ | Model Name | Score | Date Tested |
+ |------------|-------|-------------|
+ | gpt-4 | 85.71% | 2024-01-15 |
+ | agronomYi-hermes-34b | 79.05% | 2024-01-15 |
+ | mistral-medium | 77.14% | 2024-01-15 |
+ | nous-hermes-yi-34b | 76.19% | 2024-01-15 |
+ | mixtral-8x7b-instruct | 72.38% | 2024-01-15 |
+ | claude-2 | 72.38% | 2024-01-15 |
+ | yi-34b-chat | 71.43% | 2024-01-15 |
+ | norm | 69.52% | 2024-01-17 |
+ | openhermes-2.5-mistral-7b | 69.52% | 2024-01-15 |
+ | gpt-3.5-turbo | 67.62% | 2024-01-15 |
+ | mistral-7b-instruct | 61.90% | 2024-01-15 |
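+
+ # Training data
+ For reference, the training dataset linked above can be inspected directly with the `datasets` library. This is a minimal sketch; the split and column names are whatever [gbstox/agronomy-resources](https://huggingface.co/datasets/gbstox/agronomy-resources) actually ships with, so they are printed rather than assumed.
+
+ ```python
+ # Peek at the fine-tuning data. Only the dataset id comes from this model card;
+ # splits and columns are discovered at runtime.
+ from datasets import load_dataset
+
+ ds = load_dataset("gbstox/agronomy-resources")
+ print(ds)                              # available splits and their columns
+ first_split = next(iter(ds.values()))  # take whichever split comes first
+ print(first_split[0])                  # one raw example
+ ```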