Update README.md
README.md CHANGED
@@ -14,10 +14,23 @@ The resulting model achieves a puplexity of 339.38, making it competitive with C
 
 (metric explanation here: https://twitter.com/aicrumb/status/1650350363898265601 , tldr it's a joke but only kind of)
 
-
+### Evaluation of GPT2023
+
+| model | piqa acc | winogrande acc | lambada ppl | lambada acc | arc acc |
+| --- | --- | --- | --- | --- | --- |
+| pythia-70m | 59.85 | 51.22 | 140.81 | 21.40 | 17.15 |
+| pythia-160m | 62.68 | 51.07 | 30.03 | 36.76 | 19.62 |
+| pythia-410m | 66.54 | 52.24 | 11.75 | 49.93 | 21.67 |
+| opt-125m | 63.00 | 50.27 | 26.02 | 37.90 | 18.94 |
+| --- | --- | --- | --- | --- | --- |
+| gpt2 (124m) | **62.89** | **51.61** | 40.06 | 32.56 | **19.03** |
+| **gpt2023** (124m) | 62.02 | 49.64 | **34.55** | **33.98** | 18.94 |
+
 
 ### Model description
 
+*(from GPT-2 model card)*
+
 GPT-2 is a transformer model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.
 
 More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the predictions for token i only use the inputs from 1 to i and not the future tokens.
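
The "shifted one token to the right" objective in the model description above is the standard causal-LM loss in `transformers`: the labels are the input ids themselves, and the library performs the shift internally. A minimal sketch, assuming the stock `gpt2` checkpoint on the Hub (substitute the gpt2023 checkpoint id to probe this model instead):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")                  # assumed checkpoint id
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

batch = tok("The quick brown fox jumps over the lazy dog", return_tensors="pt")

# Passing labels=input_ids is the standard next-token setup: the library shifts
# the labels one position to the right before computing cross-entropy, and the
# causal mask means the prediction made at position i only sees tokens 1..i.
with torch.no_grad():
    out = model(**batch, labels=batch["input_ids"])

print(out.loss)             # average next-token cross-entropy
print(torch.exp(out.loss))  # the corresponding perplexity
```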
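Similarly, the "lambada ppl / acc" columns in the evaluation table above refer to LAMBADA-style last-word prediction. The sketch below is only a rough, unofficial illustration of that measurement, not the harness the table was produced with; the `EleutherAI/lambada_openai` dataset id and greedy single-token scoring are assumptions:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # assumed id; swap in the gpt2023 checkpoint to compare
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).eval()

# LAMBADA asks the model to predict the final word of a passage; scoring only
# the first greedily chosen token approximates the real multi-token metric.
ds = load_dataset("EleutherAI/lambada_openai", "en", split="test")

correct = total = 0
for text in ds["text"][:200]:                 # small slice, for illustration
    prompt, target = text.rsplit(" ", 1)
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    pred = tok.decode(logits[0, -1].argmax().item()).strip()
    correct += int(pred == target)
    total += 1

print(f"last-word accuracy over {total} passages: {correct / total:.2%}")
```

Real comparisons against the numbers in the table should go through a standard evaluation harness so that prompting and tokenization match across models.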