TristanThrush committed on
Commit
0fbd451
1 Parent(s): 644d132

added dec gpt2

Files changed (27)
  1. README.md +94 -0
  2. config.json +39 -0
  3. logs/1672680026.553644/events.out.tfevents.1672680026.tristan-olm-training-a100-80.97506.1 +3 -0
  4. logs/1672680410.3099709/events.out.tfevents.1672680410.tristan-olm-training-a100-80.101195.1 +3 -0
  5. logs/1672680510.7320256/events.out.tfevents.1672680510.tristan-olm-training-a100-80.104556.1 +3 -0
  6. logs/1672680710.2332163/events.out.tfevents.1672680710.tristan-olm-training-a100-80.107943.1 +3 -0
  7. logs/1672681285.8309555/events.out.tfevents.1672681285.tristan-olm-training-a100-80.111576.1 +3 -0
  8. logs/1672681495.1712687/events.out.tfevents.1672681495.tristan-olm-training-a100-80.115679.1 +3 -0
  9. logs/1672681775.8689537/events.out.tfevents.1672681775.tristan-olm-training-a100-80.119314.1 +3 -0
  10. logs/1672682182.0067658/events.out.tfevents.1672682182.tristan-olm-training-a100-80.123038.1 +3 -0
  11. logs/1672705969.2600806/events.out.tfevents.1672705969.tristan-olm-training-a100-80.138319.1 +3 -0
  12. logs/events.out.tfevents.1672680026.tristan-olm-training-a100-80.97506.0 +3 -0
  13. logs/events.out.tfevents.1672680410.tristan-olm-training-a100-80.101195.0 +3 -0
  14. logs/events.out.tfevents.1672680510.tristan-olm-training-a100-80.104556.0 +3 -0
  15. logs/events.out.tfevents.1672680710.tristan-olm-training-a100-80.107943.0 +3 -0
  16. logs/events.out.tfevents.1672681285.tristan-olm-training-a100-80.111576.0 +3 -0
  17. logs/events.out.tfevents.1672681495.tristan-olm-training-a100-80.115679.0 +3 -0
  18. logs/events.out.tfevents.1672681775.tristan-olm-training-a100-80.119314.0 +3 -0
  19. logs/events.out.tfevents.1672682181.tristan-olm-training-a100-80.123038.0 +3 -0
  20. logs/events.out.tfevents.1672705969.tristan-olm-training-a100-80.138319.0 +3 -0
  21. merges.txt +0 -0
  22. pytorch_model.bin +3 -0
  23. special_tokens_map.json +15 -0
  24. tokenizer.json +0 -0
  25. tokenizer_config.json +23 -0
  26. training_args.bin +3 -0
  27. vocab.json +0 -0
README.md ADDED
@@ -0,0 +1,94 @@
---
language: en
tags:
- exbert
---

# OLM GPT-2 December 2022

This is a more up-to-date version of the [original GPT-2](https://huggingface.co/gpt2).
In addition to being more up-to-date, it also tends to perform better than the original GPT-2 on standard benchmarks.
It was trained on a cleaned December 2022 snapshot of Common Crawl and Wikipedia.

This model was created as part of the OLM project, which has the goal of continuously training and releasing models that are up-to-date and comparable in standard language model performance to their static counterparts.
This is important because we want our models to know about events like COVID or a presidential election right after they happen.

## Intended uses

You can use the raw model for text generation or fine-tune it to a downstream task.

## How to use

You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:

```python
>>> from transformers import pipeline, set_seed
>>> # It is important to include bad_words_ids=[[0,2]] if you want this model to stay on topic.
>>> # Otherwise, the model may generate start and end tokens followed by text that is not relevant to
>>> # the previous text.
>>> generator = pipeline('text-generation', model='olm/olm-gpt2-dec-2022', bad_words_ids=[[0,2]])
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, but you want to know if I have a language in that language. Is this possible? Please explain"},
{'generated_text': "Hello, I'm a language model, and here's some useful news for you all: The C++ API is becoming more and more popular for"},
{'generated_text': "Hello, I'm a language model, I'm not trying to learn or understand a new tool, my job is to be as happy as"},
{'generated_text': "Hello, I'm a language model, a language analyst, and a language system designer. I'm just a curious guy.\n"},
{'generated_text': "Hello, I'm a language model, I'm not doing anything that needs to be done for the current time (or previous)."}]
```
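Under the hood, `bad_words_ids` works by making the banned ids impossible to sample at each generation step. Here is a rough pure-Python sketch of the single-token case (a simplification of what `transformers` actually does, which also supports banning multi-token sequences; treating ids 0 and 2 as the `<s>`/`</s>` tokens is an assumption based on this model's RoBERTa-style tokenizer):

```python
import math

def mask_bad_words(logits, bad_word_ids):
    # Set banned token ids to -inf so softmax assigns them zero
    # probability and they can never be sampled.
    masked = list(logits)
    for token_id in bad_word_ids:
        masked[token_id] = -math.inf
    return masked

# Ban ids 0 and 2 (assumed here to be the start and end tokens).
scores = mask_bad_words([1.5, 2.0, 3.0, 0.5], [0, 2])
```

With the banned scores at `-inf`, any softmax-based sampler leaves them with zero probability.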

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained('olm/olm-gpt2-dec-2022')
model = AutoModelForCausalLM.from_pretrained('olm/olm-gpt2-dec-2022')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

## Dataset

The model and tokenizer were trained with this [December 2022 cleaned Common Crawl dataset](https://huggingface.co/datasets/olm/olm-CC-MAIN-2022-49-sampling-ratio-olm-0.15114822547) plus this [December 2022 cleaned Wikipedia dataset](https://huggingface.co/datasets/olm/olm-wikipedia-20221220).\
The tokenized version of these concatenated datasets is [here](https://huggingface.co/datasets/olm/olm-december-2022-tokenized-1024).\
The datasets were created with this [repo](https://github.com/huggingface/olm-datasets).


## Training

The model was trained according to the OLM GPT-2 instructions at this [repo](https://github.com/huggingface/olm-training).

## Evaluation results

The model achieves the following results without any fine-tuning (zero-shot):

| Task | Metric | Original GPT-2 | OLM GPT-2 Dec 2022 (Ours) | Significance of Difference (two-tailed p-value) |
|:------------|:-----------|--------------------:|-------------------------:|----------------------------------:|
|rte |acc |0.5307 |0.5199 |0.7184 |
|piqa |acc/acc_norm|0.6289/0.6251 |**0.6692**/**0.6665** |**0.0004**/**0.0003** |
|copa |acc |0.6400 |0.6800 |0.4070 |
|record |f1/em |**0.7094**/**0.7026**|0.6884/0.6818 |**0.0000**/**0.0000** |
|boolq |acc |0.4872 |**0.6021** |**0.0000** |
|cb |acc/f1 |0.4107/0.2619 |0.3393/0.1840 |0.2816/NA |
|hellaswag |acc/acc_norm|0.2892/0.3114 |**0.3079**/**0.3482** |**0.0000**/**0.0000** |
|mrpc |acc/f1 |0.5662/0.6911 |**0.6814**/**0.8099** |**0.0000**/**0.0000** |
|multirc |acc |0.0189 |0.0220 |0.4755 |
|lambada |ppl/acc |40.0554/0.3256 |**28.3359**/**0.3699** |**0.0000**/**0.0000** |
|wsc |acc |0.4327 |0.3654 |0.1680 |
|wic |acc |0.4922 |0.5000 |0.6924 |
|mnli |acc |0.3372 |**0.3501** |**0.0071** |
|qnli |acc |0.5017 |0.4946 |0.2913 |
|cola |mcc |0.0126 |0.0000 |0.6880 |
|triviaqa |acc |0.0151 |**0.0181** |**0.0088** |
|winogrande |acc |0.5162 |0.5051 |0.4314 |
|webqs |acc |0.0030 |**0.0079** |**0.0000** |
|arc_easy |acc/acc_norm|0.4381/0.3948 |**0.4693**/**0.4230** |**0.0022**/**0.0049** |
|arc_challenge|acc/acc_norm|0.1903/0.2270 |0.2090/0.2398 |0.1017/0.2957 |

To get these results, we used commit `f079e322b857714fcef1ada9e78ddc606fe51e84` of the EleutherAI evaluation harness [here](https://github.com/EleutherAI/lm-evaluation-harness),
which can produce results different from those reported in the GPT-2 paper.
We added a change [here](https://github.com/EleutherAI/lm-evaluation-harness/compare/master...mathemakitten:lm-evaluation-harness:master) to enable evaluation of the OLM GPT-2, which has a slightly different vocab size.
The p-values are computed from the standard errors reported by the evaluation harness, under a normal distribution assumption.
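As a sketch of that computation: the difference of two scores with independent normal errors is itself normal, and the two-tailed p-value is read off the standard normal CDF. The standard errors below are made-up placeholders (the real ones come from the harness output), so the result is illustrative only:

```python
import math

def two_tailed_p(score_a, se_a, score_b, se_b):
    # z-score of the difference, assuming independent normal errors.
    z = (score_a - score_b) / math.sqrt(se_a ** 2 + se_b ** 2)
    # Standard normal CDF via the error function.
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))
    return 2.0 * (1.0 - phi)

# piqa accuracies from the table above, with hypothetical standard errors.
p = two_tailed_p(0.6289, 0.0113, 0.6692, 0.0110)
```

Identical scores give p = 1.0, and the function is symmetric in its two score/error pairs.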
config.json ADDED
@@ -0,0 +1,39 @@
{
  "_name_or_path": "gpt2",
  "activation_function": "gelu_new",
  "architectures": [
    "GPT2LMHeadModel"
  ],
  "attn_pdrop": 0.1,
  "bos_token_id": 50256,
  "embd_pdrop": 0.1,
  "eos_token_id": 50256,
  "initializer_range": 0.02,
  "layer_norm_epsilon": 1e-05,
  "model_type": "gpt2",
  "n_ctx": 1024,
  "n_embd": 768,
  "n_head": 12,
  "n_inner": null,
  "n_layer": 12,
  "n_positions": 1024,
  "reorder_and_upcast_attn": false,
  "resid_pdrop": 0.1,
  "scale_attn_by_inverse_layer_idx": false,
  "scale_attn_weights": true,
  "summary_activation": null,
  "summary_first_dropout": 0.1,
  "summary_proj_to_labels": true,
  "summary_type": "cls_index",
  "summary_use_proj": true,
  "task_specific_params": {
    "text-generation": {
      "do_sample": true,
      "max_length": 50
    }
  },
  "torch_dtype": "float32",
  "transformers_version": "4.24.0",
  "use_cache": true,
  "vocab_size": 50265
}
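A quick sanity check one can run against these hyperparameters is a back-of-the-envelope parameter count. The sketch below counts only the large weight matrices (embeddings, attention, and the 4x-wide MLP), ignoring biases, layer norms, and weight tying, so it slightly undercounts:

```python
def approx_gpt2_params(vocab_size, n_positions, n_embd, n_layer):
    # Token and position embedding tables.
    embeddings = vocab_size * n_embd + n_positions * n_embd
    # Per block: QKV projection (3*d*d), attention output projection (d*d),
    # and an MLP with hidden size 4*d (d*4d up plus 4d*d down) = 12*d*d total.
    per_block = 12 * n_embd ** 2
    return embeddings + n_layer * per_block

# Values from the config above; lands at ~124M, the GPT-2 "small" scale.
total = approx_gpt2_params(50265, 1024, 768, 12)
```

The estimate is consistent with the ~510 MB `pytorch_model.bin` below at 4 bytes per float32 parameter.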
logs/1672680026.553644/events.out.tfevents.1672680026.tristan-olm-training-a100-80.97506.1 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5d55cada293cf7937c658b03319d262b1fd1eb2af8bf4bd340395b0520397531
size 5500
logs/1672680410.3099709/events.out.tfevents.1672680410.tristan-olm-training-a100-80.101195.1 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:85710cd20c6303dc637ccd0e516aa8761ac82884da22a39245f18521e8701750
size 5500
logs/1672680510.7320256/events.out.tfevents.1672680510.tristan-olm-training-a100-80.104556.1 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a6cb00f5120cc48672175bc3e42486f40d63b3f2ff2c862efc69674d1185844f
size 5500
logs/1672680710.2332163/events.out.tfevents.1672680710.tristan-olm-training-a100-80.107943.1 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:337045cd794d084162abe53f2e58094502ca24493691feaa08602ec919a7c47c
size 5495
logs/1672681285.8309555/events.out.tfevents.1672681285.tristan-olm-training-a100-80.111576.1 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1de7177520ca325b4744bce7bc6eb78f769749bc1c08314286f06abaf73fabb6
size 5495
logs/1672681495.1712687/events.out.tfevents.1672681495.tristan-olm-training-a100-80.115679.1 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:eaa3057c09d085f8c84bbf006a6ea84c78f837e3387666c576bb2f5e25970919
size 5495
logs/1672681775.8689537/events.out.tfevents.1672681775.tristan-olm-training-a100-80.119314.1 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:86129e43b3d84334b2892400c9cab0525c6d576de0a463b6214ee96f9692c928
size 5495
logs/1672682182.0067658/events.out.tfevents.1672682182.tristan-olm-training-a100-80.123038.1 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:feb5479d16803853694f36094e4a3e6915852d16e6403834e481f52225b80d40
size 5483
logs/1672705969.2600806/events.out.tfevents.1672705969.tristan-olm-training-a100-80.138319.1 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b7044c9e849377b1f8035578063fd93fa61054e87e5d459587d8356964bb7011
size 5483
logs/events.out.tfevents.1672680026.tristan-olm-training-a100-80.97506.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6a4a9ae3520d3eb6c49d8e36ff71e24f171b4e740d9305f001dc1302f799399b
size 4133
logs/events.out.tfevents.1672680410.tristan-olm-training-a100-80.101195.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d8eae39626cef36bf5317d4b114d72e6ce5d8c6bfa020f419ede147c6fab3606
size 3980
logs/events.out.tfevents.1672680510.tristan-olm-training-a100-80.104556.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ed7f17024d92fdaf0594b449450071258bd1a674f3211cdbcb53b3bcc625f31d
size 3980
logs/events.out.tfevents.1672680710.tristan-olm-training-a100-80.107943.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5f9ce39d22ac949f9815fc93dd0a30d7b3b708e7cad2f5cdc629498c5a7b79c0
size 4128
logs/events.out.tfevents.1672681285.tristan-olm-training-a100-80.111576.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f81d6a5b9483b8a84cb8d49b66d019389dd70609cb94a0c0effa2e8c156ef87f
size 3974
logs/events.out.tfevents.1672681495.tristan-olm-training-a100-80.115679.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8f12719bff2c5cbc76ff8445d8f99867d2bcf8e16af08e20029d3a73f1eeef6a
size 3974
logs/events.out.tfevents.1672681775.tristan-olm-training-a100-80.119314.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1f5e660e61ca28c40e55515e892f81a60e57ecd378cc4dd27def2bf91e4572fa
size 3974
logs/events.out.tfevents.1672682181.tristan-olm-training-a100-80.123038.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:98e328b81155b5cd8429bf8b81ba9f931e2e58080fd765bd5f38e3f00e154b6b
size 19814
logs/events.out.tfevents.1672705969.tristan-olm-training-a100-80.138319.0 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:161c06a3dec95b6b1fbce8de14e64212e0dd6760bd4a979ef692cc8c5f325bf3
size 931468
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b98f5fcd658301c54ac15222432c57245582b290328806b99225b1460eda4ba4
size 510422589
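The large binaries in this commit (`pytorch_model.bin`, the event logs) are stored as Git LFS pointers rather than raw bytes: each pointer is just a few `key value` lines, per the spec linked in its `version` field. A minimal sketch of reading one:

```python
def parse_lfs_pointer(text):
    # Each non-empty line of an LFS pointer is "key value".
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The pointer for pytorch_model.bin, copied from this commit.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:b98f5fcd658301c54ac15222432c57245582b290328806b99225b1460eda4ba4
size 510422589
"""
info = parse_lfs_pointer(pointer)
```

The `size` field (here about 510 MB) is the size of the real file that `git lfs` fetches in place of the pointer.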
special_tokens_map.json ADDED
@@ -0,0 +1,15 @@
{
  "bos_token": "<s>",
  "cls_token": "<s>",
  "eos_token": "</s>",
  "mask_token": {
    "content": "<mask>",
    "lstrip": true,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": "<pad>",
  "sep_token": "</s>",
  "unk_token": "<unk>"
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,23 @@
{
  "add_prefix_space": false,
  "bos_token": "<s>",
  "cls_token": "<s>",
  "eos_token": "</s>",
  "errors": "replace",
  "mask_token": {
    "__type": "AddedToken",
    "content": "<mask>",
    "lstrip": true,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "model_max_length": 1024,
  "name_or_path": "Tristan/olm-tokenizer",
  "pad_token": "<pad>",
  "sep_token": "</s>",
  "special_tokens_map_file": null,
  "tokenizer_class": "RobertaTokenizer",
  "trim_offsets": true,
  "unk_token": "<unk>"
}
training_args.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:01df519a89a223188624b16e2f3b882b77a5f58fe4d896d0f81d46e593acf45b
size 3387
vocab.json ADDED
The diff for this file is too large to render. See raw diff