Update README.md
README.md
CHANGED
@@ -20,7 +20,7 @@ model-index:
 
 This model was trained on the gutenberg3 dataset using gemma-2-9b-it as the base. The training methods used were ORPO (gently) then SIMPO (less gently).
 
-**Gutenberg3** is a new, large DPO dataset containing extracts from 629 public domain fiction novels in the Gutenberg Library. It follows the same format as JonDurbin's original gutenberg set: pairs of texts in which the chosen text is taken directly from a novel in the Gutenberg Library and the rejected text is generated by a language model from a description of the passage. For this dataset I used gemma-2-9b-it to generate the rejected texts, the idea being that this should steer the base model away from its usual style more effectively than rejected texts generated by random or weaker models.
+[**Gutenberg3**](https://huggingface.co/datasets/sam-paech/gutenberg3-generalfiction-scifi-fantasy-romance-adventure-dpo) is a new, large DPO dataset containing extracts from 629 public domain fiction novels in the Gutenberg Library. It follows the same format as JonDurbin's original gutenberg set: pairs of texts in which the chosen text is taken directly from a novel in the Gutenberg Library and the rejected text is generated by a language model from a description of the passage. For this dataset I used gemma-2-9b-it to generate the rejected texts, the idea being that this should steer the base model away from its usual style more effectively than rejected texts generated by random or weaker models.
 
 The model writes quite naturally with little gpt-slop, having inherited some human qualities from the dataset. It writes in simpler, sparer prose than the typical over-adjectived LLM style.
 
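The linked dataset uses the chosen/rejected pair layout that preference-optimization trainers such as ORPO and SimPO consume. A minimal sketch of loading and inspecting one pair is below; the split and column names ("prompt", "chosen", "rejected") are assumptions based on the usual DPO dataset layout, not something this README specifies.

```python
# Minimal sketch: peek at one chosen/rejected pair from gutenberg3.
# Assumes the Hugging Face `datasets` library; the "train" split and the
# "prompt"/"chosen"/"rejected" column names are assumptions, not confirmed here.
from datasets import load_dataset

ds = load_dataset(
    "sam-paech/gutenberg3-generalfiction-scifi-fantasy-romance-adventure-dpo",
    split="train",
)

example = ds[0]
print(example["prompt"][:200])    # description of the passage given to the model
print(example["chosen"][:200])    # human-written extract from a Gutenberg novel
print(example["rejected"][:200])  # gemma-2-9b-it's generation for the same passage
```

If the column names differ, `print(ds.column_names)` will show the actual schema before indexing into an example.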