Sao10K committed
Commit 88e6189
Parent(s): d3e6f0e

Create README.md

Files changed (1):
  1. README.md (+41 -0)
README.md ADDED

---
license: llama2
language:
- en
---

GGUF Quants.
<br>For fp16 Repo, visit: https://huggingface.co/Sao10K/Hesperus-v1-13B-L2
<br>For Adapter, visit: https://huggingface.co/Sao10K/Hesperus-v1-LoRA
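
To run one of these quants locally, a minimal sketch with llama-cpp-python follows. The quant filename and sampling settings are assumptions; substitute whichever .gguf file you actually downloaded from this repo.

```python
# Minimal sketch: load a GGUF quant with llama-cpp-python.
# "hesperus-v1-13b.Q5_K_M.gguf" is a hypothetical filename -- use the
# actual quant file you downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="./hesperus-v1-13b.Q5_K_M.gguf",  # hypothetical local path
    n_ctx=4096,  # context window size
)

# Alpaca-style prompt (one of the two formats listed further below).
prompt = "### Instruction:\nWrite a short greeting.\n\n### Response:\n"
out = llm(prompt, max_tokens=128, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```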

Hesperus-v1 - A trained 8-bit LoRA for RP & General Purposes.
<br>Trained on the base 13B Llama 2 model.
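
To use the adapter linked above rather than a merged checkpoint, here is a sketch with PEFT. "meta-llama/Llama-2-13b-hf" is an assumption for the base checkpoint (it is gated; any copy of the Llama-2-13B base should work the same way).

```python
# Minimal sketch: attach the Hesperus-v1 LoRA adapter to the base model.
# "meta-llama/Llama-2-13b-hf" is an assumed base checkpoint id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf")

# Load the LoRA weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, "Sao10K/Hesperus-v1-LoRA")
```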

Dataset Entry Rows:
<br>RP: 8.95K
<br>MED: 10.5K
<br>General: 8.7K
<br>Total: 28.15K

This is after heavy filtering of ~500K rows and entries.
<br>V2 will see this further reduced to ~10K after a second round of cleaning.

Applicable Formats:

ShareGPT / Vicuna
<br>Alpaca

V1 is trained on a 50/50 split of these two formats (rough sketches of both layouts are shown below).
<br>For v2, I am working on converting everything to just one of the two.
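
For reference, here are sketches of the two prompt layouts. The exact spacing and system lines in the training data may differ from these common conventions.

```python
# Common Alpaca layout (spacing may differ from the training data).
alpaca = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

# Common ShareGPT / Vicuna layout.
vicuna = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the "
    "user's questions.\n\nUSER: {instruction}\nASSISTANT:"
)

print(alpaca.format(instruction="Write a short greeting."))
```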

Once V2 is completed, I will also train a 70B variant of this model.

***

This model was fine-tuned from the base Llama 2 13B model on the Hesperus dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5134

***

<br>