boris committed
Commit 467590e
1 parent: 34578dc

New model from https://wandb.ai/wandb/huggingtweets/runs/n30kmifp

README.md ADDED
@@ -0,0 +1,75 @@
+ ---
+ language: en
+ thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
+ tags:
+ - huggingtweets
+ widget:
+ - text: "My dream is"
+ ---
+
+ <div>
+ <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1351987572747202560/v_vDGtnX_400x400.png')">
+ </div>
+ <div style="margin-top: 8px; font-size: 19px; font-weight: 800">AtomicNicos | @d_overcon co-organiser 🤖 AI Bot </div>
+ <div style="font-size: 15px">@atomicnicos bot</div>
+ </div>
+
+ I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
+
+ Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
+
+ ## How does it work?
+
+ The model uses the following pipeline.
+
+ ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)
+
+ To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
+
+ ## Training data
+
+ The model was trained on [@atomicnicos's tweets](https://twitter.com/atomicnicos).
+
+ | Data | Quantity |
+ | --- | --- |
+ | Tweets downloaded | 3249 |
+ | Retweets | 221 |
+ | Short tweets | 452 |
+ | Tweets kept | 2576 |
+
+ [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3mnuo591/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
+
+ ## Training procedure
+
+ The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @atomicnicos's tweets.
+
+ Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/n30kmifp) for full transparency and reproducibility.
+
+ At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/n30kmifp/artifacts) is logged and versioned.
+
+ ## How to use
+
+ You can use this model directly with a pipeline for text generation:
+
+ ```python
+ from transformers import pipeline
+ generator = pipeline('text-generation',
+                      model='huggingtweets/atomicnicos')
+ generator("My dream is", num_return_sequences=5)
+ ```
+
+ ## Limitations and bias
+
+ The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
+
+ In addition, the data present in the user's tweets further affects the text generated by the model.
+
+ ## About
+
+ *Built by Boris Dayma*
+
+ [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)
+
+ For more details, visit the project repository.
+
+ [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
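The card's usage snippet goes through the `pipeline` helper, which hides the model and tokenizer objects. For finer control over sampling, a minimal sketch using the lower-level `transformers` API; the generation arguments here simply mirror the `task_specific_params` defaults recorded in the `config.json` added below, and everything else is standard GPT-2 usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned weights and the stock GPT-2 tokenizer from the Hub.
tokenizer = AutoTokenizer.from_pretrained("huggingtweets/atomicnicos")
model = AutoModelForCausalLM.from_pretrained("huggingtweets/atomicnicos")

# Encode the widget prompt and sample one continuation.
inputs = tokenizer("My dream is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,                       # sample rather than greedy-decode
    max_length=160,                       # defaults from config.json below
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 defines no pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```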
config.json ADDED
@@ -0,0 +1,40 @@
+ {
+   "_name_or_path": "gpt2",
+   "activation_function": "gelu_new",
+   "architectures": [
+     "GPT2LMHeadModel"
+   ],
+   "attn_pdrop": 0.1,
+   "bos_token_id": 50256,
+   "embd_pdrop": 0.1,
+   "eos_token_id": 50256,
+   "gradient_checkpointing": false,
+   "initializer_range": 0.02,
+   "layer_norm_epsilon": 1e-05,
+   "model_type": "gpt2",
+   "n_ctx": 1024,
+   "n_embd": 768,
+   "n_head": 12,
+   "n_inner": null,
+   "n_layer": 12,
+   "n_positions": 1024,
+   "resid_pdrop": 0.1,
+   "summary_activation": null,
+   "summary_first_dropout": 0.1,
+   "summary_proj_to_labels": true,
+   "summary_type": "cls_index",
+   "summary_use_proj": true,
+   "task_specific_params": {
+     "text-generation": {
+       "do_sample": true,
+       "max_length": 160,
+       "min_length": 10,
+       "prefix": "<|endoftext|>",
+       "temperature": 1.0,
+       "top_p": 0.95
+     }
+   },
+   "transformers_version": "4.4.2",
+   "use_cache": true,
+   "vocab_size": 50257
+ }
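Two things worth noting in this configuration: the architecture is unmodified GPT-2 small (12 layers, 12 attention heads, 768-dimensional embeddings), and `task_specific_params.text-generation` stores the sampling defaults that the text-generation pipeline picks up automatically. A minimal sketch to inspect both, assuming only that the repo id above resolves on the Hub:

```python
from transformers import AutoConfig

# Fetch only the configuration file shown above, not the weights.
config = AutoConfig.from_pretrained("huggingtweets/atomicnicos")

# Unmodified GPT-2 small: 12 layers, 12 heads, 768-dim embeddings.
print(config.n_layer, config.n_head, config.n_embd)  # 12 12 768

# Sampling defaults applied by the text-generation pipeline.
print(config.task_specific_params["text-generation"])
```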
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e46899700e830fe906e55f610492f7c0716b4fca16b690eaaab65ff47d28d47f
+ size 510408315
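What the diff shows is not the model itself but its Git LFS pointer: the spec version, the file's sha256 digest, and its size (about 510 MB). Cloning with `git lfs` resolves it, or, as a sketch using the `huggingface_hub` client:

```python
from huggingface_hub import hf_hub_download

# Download the actual weights file that the LFS pointer above stands in
# for, caching it locally and returning the cache path.
path = hf_hub_download(
    repo_id="huggingtweets/atomicnicos",
    filename="pytorch_model.bin",
)
print(path)
```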
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "unk_token": "<|endoftext|>"}
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "<|endoftext|>", "bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "add_prefix_space": false, "model_max_length": 1024, "special_tokens_map_file": null, "name_or_path": "gpt2"}
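Both tokenizer files mirror stock GPT-2: a byte-level BPE (the `merges.txt` and `vocab.json` in this commit) where a single `<|endoftext|>` token serves as BOS, EOS, and UNK, and `model_max_length` matches the model's `n_positions` of 1024. A quick check, again assuming the repo id resolves:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggingtweets/atomicnicos")

# GPT-2 reuses one special token for BOS, EOS, and UNK.
print(tokenizer.bos_token, tokenizer.eos_token, tokenizer.unk_token)

# Matches n_positions / n_ctx = 1024 in config.json.
print(tokenizer.model_max_length)
```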
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6b9252a48700ca352219d2bba0aeeceba96b1f1eebff353c5263c690c0b73d28
+ size 2287
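`training_args.bin` is another small LFS-tracked file: the pickled `TrainingArguments` from the `Trainer` run. A hedged sketch for inspecting it locally, assuming a `transformers` version compatible with 4.4.2 is installed (the pickle references its classes) and noting that recent `torch` releases require `weights_only=False` to load arbitrary pickles:

```python
import torch

# The file is a pickled transformers.TrainingArguments object, not a
# tensor checkpoint, so full (unsafe) unpickling must be enabled.
args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.num_train_epochs)
```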
vocab.json ADDED
The diff for this file is too large to render. See raw diff