boris committed on
Commit 3d801fb
Parent: c4bb092

New model from https://wandb.ai/wandb/huggingtweets/runs/3my4azzd

Files changed (5):
  1. README.md +8 -8
  2. config.json +1 -1
  3. pytorch_model.bin +2 -2
  4. tokenizer.json +6 -3
  5. training_args.bin +2 -2
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 language: en
-thumbnail: http://www.huggingtweets.com/temapex/1647273355190/predictions.png
+thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
 tags:
 - huggingtweets
 widget:
@@ -10,7 +10,7 @@ widget:
 <div class="inline-flex flex-col" style="line-height: 1.5;">
 <div class="flex">
 <div
-style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1454892329115045888/M5-Boq34_400x400.jpg')">
+style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1511150115582525442/9l-weW8Z_400x400.jpg')">
 </div>
 <div
 style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
@@ -43,19 +43,19 @@ The model was trained on tweets from Ema Pex 🌠 ペクスえま.
 | Data | Ema Pex 🌠 ペクスえま |
 | --- | --- |
 | Tweets downloaded | 3245 |
-| Retweets | 490 |
-| Short tweets | 262 |
-| Tweets kept | 2493 |
+| Retweets | 446 |
+| Short tweets | 259 |
+| Tweets kept | 2540 |
 
-[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/27plk4jv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
+[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2qyw32m2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
 
 ## Training procedure
 
 The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @temapex's tweets.
 
-Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1lmh5nds) for full transparency and reproducibility.
+Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3my4azzd) for full transparency and reproducibility.
 
-At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1lmh5nds/artifacts) is logged and versioned.
+At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3my4azzd/artifacts) is logged and versioned.
 
 ## How to use
 
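The hunk stops at the "## How to use" heading, so the usage snippet itself is not shown in this diff. For context, a minimal sketch of how a huggingtweets model is typically loaded with the transformers text-generation pipeline; the repo ID `huggingtweets/temapex` is inferred from the @temapex handle above, not stated in the diff:

```python
from transformers import pipeline

# Model ID inferred from the @temapex handle; not shown in this diff.
generator = pipeline("text-generation", model="huggingtweets/temapex")

# Prompt the fine-tuned GPT-2 and sample a few continuations.
outputs = generator("My dream is", num_return_sequences=3)
for out in outputs:
    print(out["generated_text"])
```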
config.json CHANGED
@@ -37,7 +37,7 @@
 }
 },
 "torch_dtype": "float32",
-"transformers_version": "4.17.0",
+"transformers_version": "4.18.0",
 "use_cache": true,
 "vocab_size": 50257
 }
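The only change here is the `transformers_version` stamp (4.17.0 → 4.18.0), written by whichever library version saved the checkpoint. A minimal sketch of reading it back, again assuming the hypothetical `huggingtweets/temapex` repo ID:

```python
from transformers import AutoConfig

# Hypothetical model ID; the diff itself does not name the repo.
config = AutoConfig.from_pretrained("huggingtweets/temapex")

# Records which transformers version saved this checkpoint.
print(config.transformers_version)  # e.g. "4.18.0" after this commit
```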
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f6864e85f78e52d23f0f30b53356b0a5c2a6cf4960778552224098ad27e64146
-size 510404393
+oid sha256:9f461cf6112813439feabe7bc688eaa195adedc0eeb214b2f76a734066b9fbbc
+size 510396521
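This is a Git LFS pointer file: the repo stores only the object's sha256 digest and byte size, while the weights themselves live in LFS storage. A minimal sketch, assuming the real `pytorch_model.bin` has already been downloaded, of verifying it against the pointer's oid:

```python
import hashlib

# Expected digest taken from the LFS pointer after this commit.
EXPECTED = "9f461cf6112813439feabe7bc688eaa195adedc0eeb214b2f76a734066b9fbbc"

# Stream the file in 1 MiB chunks so a ~510 MB checkpoint is not read at once.
h = hashlib.sha256()
with open("pytorch_model.bin", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

assert h.hexdigest() == EXPECTED, "checksum mismatch: wrong or corrupted download"
```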
tokenizer.json CHANGED
@@ -17,17 +17,20 @@
 "pre_tokenizer": {
 "type": "ByteLevel",
 "add_prefix_space": false,
-"trim_offsets": true
+"trim_offsets": true,
+"use_regex": true
 },
 "post_processor": {
 "type": "ByteLevel",
 "add_prefix_space": true,
-"trim_offsets": false
+"trim_offsets": false,
+"use_regex": true
 },
 "decoder": {
 "type": "ByteLevel",
 "add_prefix_space": true,
-"trim_offsets": true
+"trim_offsets": true,
+"use_regex": true
 },
 "model": {
 "type": "BPE",
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:86a2b24f18f52d8256fa1077d92cc4aba92bdf4a3171912c074a675f6307656a
-size 2991
+oid sha256:629a38dee70e1da4791e01a4ff09710681ba562b6af369b51e2916cd804f6fd9
+size 3055
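`training_args.bin` is the pickled TrainingArguments object saved by the transformers Trainer, which is why its size shifts slightly across library versions. A minimal sketch of reading the hyperparameters back, assuming a local copy and a transformers install compatible with the one that wrote it:

```python
import torch

# TrainingArguments is pickled into training_args.bin by transformers' Trainer;
# unpickling needs a compatible transformers version on the import path.
args = torch.load("training_args.bin")

print(args.learning_rate, args.num_train_epochs, args.per_device_train_batch_size)
```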