boris committed
Commit bf15dae • 1 Parent(s): 3819699

New model from https://wandb.ai/wandb/huggingtweets/runs/1y440us5

Files changed (6)
  1. README.md +11 -11
  2. config.json +4 -2
  3. pytorch_model.bin +2 -2
  4. tokenizer.json +0 -0
  5. tokenizer_config.json +1 -1
  6. training_args.bin +2 -2
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 language: en
-thumbnail: https://www.huggingtweets.com/hampshireomen/1623293818377/predictions.png
+thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
 tags:
 - huggingtweets
 widget:
@@ -20,7 +20,7 @@ widget:
 </div>
 </div>
 <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
-<div style="text-align: center; font-size: 16px; font-weight: 800">The Omen</div>
+<div style="text-align: center; font-size: 16px; font-weight: 800">the omen is cringe tbh</div>
 <div style="text-align: center; font-size: 14px;">@hampshireomen</div>
 </div>
 
@@ -38,24 +38,24 @@ To understand how the model was developed, check the [W&B report](https://wandb.
 
 ## Training data
 
-The model was trained on tweets from The Omen.
+The model was trained on tweets from the omen is cringe tbh.
 
-| Data | The Omen |
+| Data | the omen is cringe tbh |
 | --- | --- |
-| Tweets downloaded | 1061 |
-| Retweets | 36 |
-| Short tweets | 79 |
-| Tweets kept | 946 |
+| Tweets downloaded | 1462 |
+| Retweets | 68 |
+| Short tweets | 109 |
+| Tweets kept | 1285 |
 
-[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/jvjiew2i/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
+[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1792rc86/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
 
 ## Training procedure
 
 The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hampshireomen's tweets.
 
-Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3pu4rj3u) for full transparency and reproducibility.
+Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1y440us5) for full transparency and reproducibility.
 
-At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3pu4rj3u/artifacts) is logged and versioned.
+At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1y440us5/artifacts) is logged and versioned.
 
 ## How to use
 
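The README's "How to use" section is cut off by the diff, so for reference, a minimal sketch of loading this checkpoint from the Hub. The repo id `huggingtweets/hampshireomen` is assumed from this commit's context, not stated in the diff itself:

```python
from transformers import pipeline

# Text-generation pipeline over the fine-tuned checkpoint
# (repo id assumed from the commit context).
generator = pipeline("text-generation", model="huggingtweets/hampshireomen")
for out in generator("My dream is", num_return_sequences=3, max_length=50):
    print(out["generated_text"])
```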
config.json CHANGED
@@ -8,7 +8,6 @@
   "bos_token_id": 50256,
   "embd_pdrop": 0.1,
   "eos_token_id": 50256,
-  "gradient_checkpointing": false,
   "initializer_range": 0.02,
   "layer_norm_epsilon": 1e-05,
   "model_type": "gpt2",
@@ -18,7 +17,9 @@
   "n_inner": null,
   "n_layer": 12,
   "n_positions": 1024,
+  "reorder_and_upcast_attn": false,
   "resid_pdrop": 0.1,
+  "scale_attn_by_inverse_layer_idx": false,
   "scale_attn_weights": true,
   "summary_activation": null,
   "summary_first_dropout": 0.1,
@@ -35,7 +36,8 @@
       "top_p": 0.95
     }
   },
-  "transformers_version": "4.6.1",
+  "torch_dtype": "float32",
+  "transformers_version": "4.17.0",
   "use_cache": true,
   "vocab_size": 50257
 }
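The two new attention flags (`scale_attn_by_inverse_layer_idx`, `reorder_and_upcast_attn`) and the serialized `torch_dtype` come from the newer transformers version recorded above. A quick way to confirm how they deserialize, again assuming the repo id used earlier:

```python
from transformers import GPT2Config

# Load the updated config straight from the Hub and inspect the new keys.
config = GPT2Config.from_pretrained("huggingtweets/hampshireomen")
print(config.scale_attn_by_inverse_layer_idx)  # False
print(config.reorder_and_upcast_attn)          # False
print(config.torch_dtype)                      # torch.float32 (serialized as "float32")
```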
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ee25a36e97cef46f45374545121c67bfef33e649f502e047f57b1b39ecc792be
-size 510408315
+oid sha256:ec747725831350b1ec209dc64a81d0acf8c01762bd2862d898ffc2acac05fd42
+size 510404393
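Only the Git LFS pointer changes here, so a local download can be verified against the new `oid sha256` directly. A minimal sketch using only the standard library; the file path is whatever your download produced:

```python
import hashlib

# sha256 from the new LFS pointer in this commit
EXPECTED = "ec747725831350b1ec209dc64a81d0acf8c01762bd2862d898ffc2acac05fd42"

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in 1 MiB chunks to avoid loading ~510 MB into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

assert sha256_of("pytorch_model.bin") == EXPECTED  # size should be 510404393 bytes
```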
tokenizer.json CHANGED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json CHANGED
@@ -1 +1 @@
-{"unk_token": "<|endoftext|>", "bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "add_prefix_space": false, "model_max_length": 1024, "special_tokens_map_file": null, "name_or_path": "gpt2"}
+{"unk_token": "<|endoftext|>", "bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "add_prefix_space": false, "model_max_length": 1024, "special_tokens_map_file": null, "name_or_path": "gpt2", "tokenizer_class": "GPT2Tokenizer"}
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:63307ca5e8009919b65c84de5617bd6c6f7695da16b40d5407d7b518fe825fb1
-size 2415
+oid sha256:7202869db73d53d48afa8cb785f525366a5f629090224f1c66d022411b60aca3
+size 3055
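`training_args.bin` is the pickled `TrainingArguments` object that the `Trainer` saves alongside the model; the size change is consistent with the newer transformers release recorded in config.json. A sketch of inspecting it locally; the exact fields present depend on the transformers version:

```python
import torch

# The file is a Python pickle, not a tensor checkpoint, so plain torch.load works
# (on recent PyTorch, pass weights_only=False to allow unpickling;
# transformers must be importable so the class can be reconstructed).
args = torch.load("training_args.bin")
print(args.learning_rate, args.num_train_epochs, args.output_dir)
```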