Ba2han committed
Commit
137a42e
1 Parent(s): 7260750

Update README.md

Files changed (1): README.md (+7 -1)
README.md CHANGED

@@ -6,6 +6,8 @@ datasets:
 - Ba2han/databricks-dolly_rated
 - Open-Orca/OpenOrca
 ---
+
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/6324eabf05bd8a54c6eb1650/xRIRb-57y8tyROdrF4aeI.png)
 The training dataset consists of 2k (longest) examples from no_robots, reddit_instruct, dolly, OpenOrca plus two other personal datasets.
 
 Please use with ChatML and the default system message or enter your own. It was trained with various system messages, the one in the config being the default one.
@@ -19,4 +21,8 @@ The model is:
 
 - Not great with short text both in input and generation.
 
-The aim is to see how the **"Long is More for Alignment"** paper holds. This is basically a combination of LIMA + LMA. There should be no benchmark contamination as far as I am aware of. Around 70% of the data is from the mentioned datasets. I am happy with how it turned out.
+The aim is to see how the **"Long is More for Alignment"** paper holds. This is basically a combination of LIMA + LMA. There should be no benchmark contamination as far as I am aware of. Around 70% of the data is from the mentioned datasets. I am happy with how it turned out.
+
+
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/6324eabf05bd8a54c6eb1650/qtvTG0XVdEgr3SE58Dmx-.png)
+
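The README asks users to prompt the model with ChatML. As a minimal sketch of that format (the model's actual default system message lives in its config and is not reproduced here; the system and user strings below are placeholders), a single-turn ChatML prompt can be assembled like this:

```python
# Minimal sketch of the ChatML prompt layout the README refers to.
# The system message here is a placeholder, not the model's real default.
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a single-turn ChatML prompt, ending at the assistant turn
    so the model continues from there."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",          # placeholder system message
    "Summarize LIMA in one sentence.",       # example user turn
)
print(prompt)
```

In practice, if the tokenizer ships a chat template, `tokenizer.apply_chat_template` produces the same layout without hand-building the string.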