boris committed on
Commit
c6f7f53
1 Parent(s): 8ff9b5b

New model from https://wandb.ai/wandb/huggingtweets/runs/2h617imq

README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 language: en
-thumbnail: https://www.huggingtweets.com/rebeccafiebrink/1606743426222/predictions.png
+thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
 tags:
 - huggingtweets
 widget:
@@ -51,11 +51,11 @@ The model was trained on [@rebeccafiebrink's tweets](https://twitter.com/rebecca
 <tbody style='border-width:0'>
 <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
 <td style='border-width:0'>Tweets downloaded</td>
-<td style='border-width:0'>1809</td>
+<td style='border-width:0'>1810</td>
 </tr>
 <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
 <td style='border-width:0'>Retweets</td>
-<td style='border-width:0'>672</td>
+<td style='border-width:0'>673</td>
 </tr>
 <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
 <td style='border-width:0'>Short tweets</td>
@@ -68,15 +68,15 @@ The model was trained on [@rebeccafiebrink's tweets](https://twitter.com/rebecca
 </tbody>
 </table>
 
-[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1qkninfa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
+[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/262j0sxo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
 
 ## Training procedure
 
 The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rebeccafiebrink's tweets.
 
-Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2rx4xcp6) for full transparency and reproducibility.
+Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2h617imq) for full transparency and reproducibility.
 
-At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2rx4xcp6/artifacts) is logged and versioned.
+At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2h617imq/artifacts) is logged and versioned.
 
 ## Intended uses & limitations
 
@@ -110,4 +110,4 @@ For more details, visit the project repository.
 
 [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
 
-<!--- random size file -->
+<!--- random size file -->
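The card text in this diff describes a GPT-2 checkpoint fine-tuned on @rebeccafiebrink's tweets. As a minimal usage sketch, assuming the repo id `huggingtweets/rebeccafiebrink` (inferred from the card, not shown in this diff):

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hub.
# The model id is an assumption inferred from the card; adjust if the repo name differs.
generator = pipeline("text-generation", model="huggingtweets/rebeccafiebrink")

# Generate a tweet-style continuation from a short prompt.
print(generator("My dream is", num_return_sequences=1)[0]["generated_text"])
```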
config.json CHANGED
@@ -34,5 +34,6 @@
       "top_p": 0.95
     }
   },
+  "use_cache": true,
   "vocab_size": 50257
 }
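The `config.json` change adds `use_cache` next to the existing generation defaults (`top_p: 0.95` nested under the task-specific generation parameters, and the standard GPT-2 `vocab_size` of 50257). A small sketch of inspecting those values, assuming the same repo id as above and the usual GPT-2 `task_specific_params` layout:

```python
from transformers import AutoConfig

# Repo id is an assumption inferred from the model card, not from this diff.
config = AutoConfig.from_pretrained("huggingtweets/rebeccafiebrink")

print(config.use_cache)   # True after this commit
print(config.vocab_size)  # 50257, the standard GPT-2 BPE vocabulary size
# top_p sits under the nested generation defaults (layout assumed from the diff context).
print(config.task_specific_params["text-generation"]["top_p"])  # 0.95
```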
merges.txt CHANGED
@@ -1,4 +1,4 @@
-#version: 0.2
+#version: 0.2 - Trained by `huggingface/tokenizers`
 Ġ t
 Ġ a
 h e
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c8da4eb8b74dce49b0c1a9b14df7f7c99b6427bd5a65368afd43a0ea8d9b7a70
-size 510406554
+oid sha256:911903303dd6ae2eb6d5d661253b1629a29e99c2f4f8addaa47608871cb4a70d
+size 510406560
special_tokens_map.json CHANGED
@@ -1 +1 @@
-{"bos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "eos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "unk_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}}
+{"bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "unk_token": "<|endoftext|>"}
tokenizer_config.json CHANGED
@@ -1 +1 @@
-{"errors": "replace", "unk_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "bos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "eos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "add_prefix_space": false, "model_max_length": 1024, "name_or_path": "gpt2"}
+{"unk_token": "<|endoftext|>", "bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "add_prefix_space": false, "model_max_length": 1024, "name_or_path": "gpt2"}
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7baa065b06eb327e0ef10c4c23f215cdd886a2a6ec677b0b315ec1ba078caa99
-size 1839
+oid sha256:6871a1889b9d07cffaa638551bda0393c76054b4347727f83af5176cccad9732
+size 1775
vocab.json CHANGED
The diff for this file is too large to render. See raw diff