boris committed
Commit 19ea8c7
1 Parent(s): 83dc820

New model from https://wandb.ai/wandb/huggingtweets/runs/v73gszw3

README.md CHANGED
@@ -7,11 +7,21 @@ widget:
 - text: "My dream is"
 ---
 
-<div>
-<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1348065490405617665/0xedqEt-_400x400.jpg')">
-</div>
-<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Leffen 🤖 AI Bot </div>
-<div style="font-size: 15px">@tsm_leffen bot</div>
+<div class="inline-flex flex-col" style="line-height: 1.5;">
+<div class="flex">
+<div
+style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1605643724356259847/_D2EGnon_400x400.jpg&#39;)">
+</div>
+<div
+style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
+</div>
+<div
+style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)">
+</div>
+</div>
+<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
+<div style="text-align: center; font-size: 16px; font-weight: 800">TSM Leffen</div>
+<div style="text-align: center; font-size: 14px;">@tsm_leffen</div>
 </div>
 
 I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
@@ -24,28 +34,28 @@ The model uses the following pipeline.
 
 ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
 
-To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
+To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
 
 ## Training data
 
-The model was trained on [@tsm_leffen's tweets](https://twitter.com/tsm_leffen).
+The model was trained on tweets from TSM Leffen.
 
-| Data | Quantity |
+| Data | TSM Leffen |
 | --- | --- |
-| Tweets downloaded | 3248 |
-| Retweets | 319 |
-| Short tweets | 237 |
-| Tweets kept | 2692 |
+| Tweets downloaded | 3245 |
+| Retweets | 260 |
+| Short tweets | 383 |
+| Tweets kept | 2602 |
 
-[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1v3zmq78/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
+[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/uutnode8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
 
 ## Training procedure
 
 The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tsm_leffen's tweets.
 
-Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2b45dbho) for full transparency and reproducibility.
+Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/v73gszw3) for full transparency and reproducibility.
 
-At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2b45dbho/artifacts) is logged and versioned.
+At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/v73gszw3/artifacts) is logged and versioned.
 
 ## How to use
 
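Note that the updated training-data table is internally consistent: 3245 downloaded - 260 retweets - 383 short tweets = 2602 tweets kept. The hunk ends at the README's "How to use" section; huggingtweets model cards typically demonstrate generation through the transformers text-generation pipeline. A minimal sketch, assuming the model is published under the Hub id `huggingtweets/tsm_leffen` (inferred from the handle above):

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hugging Face Hub.
# The repo id "huggingtweets/tsm_leffen" is an assumption based on the
# @tsm_leffen handle in this README.
generator = pipeline("text-generation", model="huggingtweets/tsm_leffen")

# The prompt matches the widget text declared in the README front matter.
for output in generator("My dream is", num_return_sequences=5):
    print(output["generated_text"])
```
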
config.json CHANGED
@@ -8,7 +8,6 @@
 "bos_token_id": 50256,
 "embd_pdrop": 0.1,
 "eos_token_id": 50256,
-"gradient_checkpointing": false,
 "initializer_range": 0.02,
 "layer_norm_epsilon": 1e-05,
 "model_type": "gpt2",
@@ -18,7 +17,10 @@
 "n_inner": null,
 "n_layer": 12,
 "n_positions": 1024,
+"reorder_and_upcast_attn": false,
 "resid_pdrop": 0.1,
+"scale_attn_by_inverse_layer_idx": false,
+"scale_attn_weights": true,
 "summary_activation": null,
 "summary_first_dropout": 0.1,
 "summary_proj_to_labels": true,
@@ -34,7 +36,8 @@
 "top_p": 0.95
 }
 },
-"transformers_version": "4.3.3",
+"torch_dtype": "float32",
+"transformers_version": "4.28.1",
 "use_cache": true,
 "vocab_size": 50257
 }
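The config diff drops the deprecated top-level `gradient_checkpointing` flag and adds the attention-scaling fields plus `torch_dtype`, reflecting the jump from transformers 4.3.3 to 4.28.1. A quick way to inspect the new fields; a sketch under the same repo-id assumption as above:

```python
from transformers import AutoConfig

# Fetch the updated config.json from the Hub (repo id assumed, as above).
config = AutoConfig.from_pretrained("huggingtweets/tsm_leffen")

# Fields introduced by this commit's newer transformers version.
print(config.scale_attn_weights)               # True
print(config.scale_attn_by_inverse_layer_idx)  # False
print(config.reorder_and_upcast_attn)          # False
print(config.torch_dtype)                      # torch.float32
```
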
flax_model.msgpack DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:8b32b769a6f39b1322805fc5184f5c19b4ebd8ad5799ca7e4d0dc1b0a0fbb073
-size 497764120

generation_config.json ADDED
@@ -0,0 +1,6 @@
+{
+"_from_model_config": true,
+"bos_token_id": 50256,
+"eos_token_id": 50256,
+"transformers_version": "4.28.1"
+}
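Since transformers 4.26, generation defaults live in this separate generation_config.json rather than inside config.json (hence `"_from_model_config": true`). It can be read directly; a sketch under the same repo-id assumption:

```python
from transformers import GenerationConfig

# Loads the generation_config.json added in this commit.
gen_config = GenerationConfig.from_pretrained("huggingtweets/tsm_leffen")
print(gen_config.bos_token_id, gen_config.eos_token_id)  # 50256 50256
```
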
merges.txt CHANGED
@@ -1,4 +1,4 @@
-#version: 0.2 - Trained by `huggingface/tokenizers`
+#version: 0.2
 Ġ t
 Ġ a
 h e

pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:efac57ed53b0529f6009ab1d79e74ae4d1abd49ea841bc75abbda2e19ad7269b
-size 510408315
+oid sha256:588ea95ba17ac8c64e4d4330f57d33d00941a00eccd97478ac2e52324c810f8a
+size 510398013
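pytorch_model.bin is stored via Git LFS, so the diff only swaps the pointer's `oid` and `size`. Since LFS oids are plain SHA-256 digests of the file contents, a downloaded copy can be verified against the new pointer with the standard library; the local path here is hypothetical:

```python
import hashlib

# Hypothetical local path to the downloaded weights file.
path = "pytorch_model.bin"

# Hash the file in 1 MiB chunks to avoid loading ~510 MB into memory.
digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

# Should match the oid recorded in the LFS pointer after this commit.
expected = "588ea95ba17ac8c64e4d4330f57d33d00941a00eccd97478ac2e52324c810f8a"
print(digest.hexdigest() == expected)
```
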
special_tokens_map.json CHANGED
@@ -1 +1,5 @@
-{"bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "unk_token": "<|endoftext|>"}
+{
+"bos_token": "<|endoftext|>",
+"eos_token": "<|endoftext|>",
+"unk_token": "<|endoftext|>"
+}

tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json CHANGED
@@ -1 +1,9 @@
-{"unk_token": "<|endoftext|>", "bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "add_prefix_space": false, "model_max_length": 1024, "name_or_path": "gpt2"}
+{
+"add_prefix_space": false,
+"bos_token": "<|endoftext|>",
+"clean_up_tokenization_spaces": true,
+"eos_token": "<|endoftext|>",
+"model_max_length": 1024,
+"tokenizer_class": "GPT2Tokenizer",
+"unk_token": "<|endoftext|>"
+}
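The expanded tokenizer_config.json now pins `tokenizer_class` and `clean_up_tokenization_spaces` explicitly and drops the stale `name_or_path`. Loading is unchanged; a sketch under the same repo-id assumption (with tokenizer.json present, AutoTokenizer resolves to the fast GPT-2 tokenizer):

```python
from transformers import AutoTokenizer

# Resolves to GPT2TokenizerFast because this commit also adds tokenizer.json.
tokenizer = AutoTokenizer.from_pretrained("huggingtweets/tsm_leffen")
print(type(tokenizer).__name__)    # GPT2TokenizerFast
print(tokenizer.model_max_length)  # 1024
```
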
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a1c261c1374bf953cdd9d1cdf79c265215624dbe19ce6abace1df3ced7db9168
-size 2159
+oid sha256:c7746c5b0e5eb8c88230ad1e6624ba4d8e005ddc1af7cd307051c44f81fd89d5
+size 3579