boris committed
Commit 55ae371
Parent: 2ae5ab8

New model from https://wandb.ai/wandb/huggingtweets/runs/1ws7gnv2

README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 language: en
-thumbnail: https://www.huggingtweets.com/kanyewest/1634702536209/predictions.png
+thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
 tags:
 - huggingtweets
 widget:
@@ -42,20 +42,20 @@ The model was trained on tweets from ye.
 
 | Data | ye |
 | --- | --- |
-| Tweets downloaded | 1856 |
-| Retweets | 186 |
-| Short tweets | 573 |
-| Tweets kept | 1097 |
+| Tweets downloaded | 1867 |
+| Retweets | 192 |
+| Short tweets | 575 |
+| Tweets kept | 1100 |
 
-[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/bvnrjbxn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
+[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/14hpraax/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
 
 ## Training procedure
 
 The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kanyewest's tweets.
 
-Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2nxhg2su) for full transparency and reproducibility.
+Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ws7gnv2) for full transparency and reproducibility.
 
-At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2nxhg2su/artifacts) is logged and versioned.
+At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ws7gnv2/artifacts) is logged and versioned.
 
 ## How to use
 
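The "How to use" section that follows in the card (unchanged by this commit) points at the standard transformers text-generation pipeline. A minimal sketch, assuming the repo id is `huggingtweets/kanyewest` (the diff itself does not name the repo):

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hub and sample five tweets.
generator = pipeline("text-generation", model="huggingtweets/kanyewest")
print(generator("My dream is", num_return_sequences=5))
```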
 
config.json CHANGED
@@ -17,7 +17,9 @@
   "n_inner": null,
   "n_layer": 12,
   "n_positions": 1024,
+  "reorder_and_upcast_attn": false,
   "resid_pdrop": 0.1,
+  "scale_attn_by_inverse_layer_idx": false,
   "scale_attn_weights": true,
   "summary_activation": null,
   "summary_first_dropout": 0.1,
@@ -35,7 +37,7 @@
     }
   },
   "torch_dtype": "float32",
-  "transformers_version": "4.11.3",
+  "transformers_version": "4.23.1",
   "use_cache": true,
   "vocab_size": 50257
 }
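The two added keys are not new behavior: `reorder_and_upcast_attn` and `scale_attn_by_inverse_layer_idx` are attention-stability options that newer transformers releases serialize into every GPT-2 config, and both stay at their `false` defaults here. A quick check, again under the hypothetical `huggingtweets/kanyewest` repo id:

```python
from transformers import GPT2Config

# Hypothetical repo id; substitute the actual model repo for this commit.
config = GPT2Config.from_pretrained("huggingtweets/kanyewest")

# Both flags default to False, matching the values added in this diff.
print(config.reorder_and_upcast_attn)          # False
print(config.scale_attn_by_inverse_layer_idx)  # False
```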
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9f5889ef6a6a917e4dcd427b1dd95910d04cdcc6eea8c9f9815acfbdf7810582
-size 510403817
+oid sha256:a005ed9d70bee95857616a13fc5f1438e80d66ba21e3e1d067f0f3f2f00acabf
+size 510396521
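Only the Git LFS pointer lives in the repo; the retrained weights are a few kilobytes smaller and carry a new SHA-256. A minimal sketch for verifying a downloaded copy against the pointer's oid, assuming the file sits in the working directory:

```python
import hashlib

# Stream the ~510 MB file in 1 MiB chunks to avoid loading it into memory.
expected = "a005ed9d70bee95857616a13fc5f1438e80d66ba21e3e1d067f0f3f2f00acabf"
h = hashlib.sha256()
with open("pytorch_model.bin", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
assert h.hexdigest() == expected, "checksum mismatch"
```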
special_tokens_map.json CHANGED
@@ -1 +1,5 @@
-{"bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "unk_token": "<|endoftext|>"}
+{
+  "bos_token": "<|endoftext|>",
+  "eos_token": "<|endoftext|>",
+  "unk_token": "<|endoftext|>"
+}
tokenizer.json CHANGED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json CHANGED
@@ -1 +1,10 @@
-{"unk_token": "<|endoftext|>", "bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "add_prefix_space": false, "model_max_length": 1024, "special_tokens_map_file": null, "name_or_path": "gpt2", "tokenizer_class": "GPT2Tokenizer"}
+{
+  "add_prefix_space": false,
+  "bos_token": "<|endoftext|>",
+  "eos_token": "<|endoftext|>",
+  "model_max_length": 1024,
+  "name_or_path": "gpt2",
+  "special_tokens_map_file": null,
+  "tokenizer_class": "GPT2Tokenizer",
+  "unk_token": "<|endoftext|>"
+}
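Both tokenizer JSON files were only reserialized (pretty-printed, keys sorted); the vocabulary and special tokens themselves are unchanged. A quick sanity check under the same repo-id assumption:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("huggingtweets/kanyewest")

# GPT-2 reuses a single special token for bos/eos/unk, as both files record.
assert tok.bos_token == tok.eos_token == tok.unk_token == "<|endoftext|>"
print(tok.model_max_length)  # 1024
```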
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1dd02eb888f3e655621944bc1e2cdfc5ef73d54c6bd4713dca95b230da85cdf4
-size 2863
+oid sha256:e077dba13fff7de51059311109b992316ceeec32f0ca23f1f25aedc8f537799d
+size 3375
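training_args.bin grew from 2,863 to 3,375 bytes, consistent with the newer transformers release pickling a larger `TrainingArguments` object. A sketch for inspecting the hyperparameters locally; the file is an arbitrary Python pickle, so only load it from a trusted source, with transformers installed so the class can be unpickled:

```python
import torch

# Pass weights_only=False (the default before torch 2.6) so torch.load
# will unpickle the non-tensor TrainingArguments object.
args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.num_train_epochs)
```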