Update README.md
README.md CHANGED
@@ -46,7 +46,7 @@ GPT-2 can contain many biases and factual inaccuracies, however, given the model
 
 For training this model, I use the GPT Wiki Intro dataset, which consists of Wikipedia intro paragraphs written by GPT-3.
 The Curie model variant is used to generate intro paragraphs given the title of the article and the first 7 words as a prompt.
-150k topics are present in the dataset
+150k topics are present in the dataset; the creators did not share how these topics were selected.
 Each LLM-written paragraph is paired with the human written version; having paired examples from the same domain is why I chose this dataset.
 For computational efficiency, I use only 4000 topics for training and 1000 for testing.
 
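For illustration, here is a minimal sketch of how the dataset described above could be loaded and subsampled with the Hugging Face `datasets` library. The dataset identifier `aadityaubhat/GPT-wiki-intro` and the column names are assumptions based on the public release rather than details from this README, and the 4000/1000 split below only mirrors the numbers stated above; it is not the author's actual training code.

```python
# Sketch only: load the GPT Wiki Intro dataset and take a 4000/1000 topic split.
# Assumes the Hugging Face `datasets` library and the public
# `aadityaubhat/GPT-wiki-intro` dataset; column names may differ.
from datasets import load_dataset

# Each row pairs a human-written Wikipedia intro with a GPT-3 (Curie) intro
# generated from the article title and the first 7 words as a prompt.
dataset = load_dataset("aadityaubhat/GPT-wiki-intro", split="train")

# Subsample topics for computational efficiency: 4000 for training, 1000 for testing.
dataset = dataset.shuffle(seed=42)
train_topics = dataset.select(range(4000))
test_topics = dataset.select(range(4000, 5000))

# Illustrative column names: "wiki_intro" (human) and "generated_intro" (GPT-3).
print(train_topics[0]["title"])
```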