crumb committed
Commit c71db00 • 1 Parent(s): 5d69262
Update README.md

README.md CHANGED
@@ -37,20 +37,12 @@ output = model.generate(
 output = tokenizer.decode(output[0]).replace("[/n]","\n")
 print(output)
 ```
-This model is a fine-tuned version of gpt2-large on the entirety of Regular Show. It achieves the following results on the evaluation set (The Power, Death Punchies, Do Me a Solid):
-
-
-More information needed
+This model is a fine-tuned version of gpt2-large on the entirety of Regular Show. It achieves the following results on the evaluation set (The Power, Death Punchies, Do Me a Solid):
+- Loss: 1.6383
 
 ## Intended uses & limitations
 
-
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
+Same as gpt2-large
 
 ### Training hyperparameters
 
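The decode line in the README snippet replaces the literal marker "[/n]" with a real newline before printing, which suggests the fine-tuning data encodes line breaks as "[/n]" tokens. A minimal sketch of that post-processing step in isolation (the sample string below is invented for illustration, not taken from the model's actual output):

```python
# Sketch of the README's post-processing step: the fine-tuned model
# emits "[/n]" as a newline marker, which is swapped for "\n" after
# decoding so the script prints on separate lines.
decoded = "Mordecai: Dude, check this out.[/n]Rigby: Whoa!"  # hypothetical decoded text
output = decoded.replace("[/n]", "\n")
print(output)
```

In the real snippet, `decoded` comes from `tokenizer.decode(output[0])`; only the `.replace("[/n]", "\n")` call is demonstrated here.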