boris committed
Commit 828c052
Parent: a77c5bc

New model from https://wandb.ai/wandb/huggingtweets/runs/3ufcisio
README.md CHANGED

```diff
@@ -1,6 +1,6 @@
 ---
 language: en
-thumbnail: https://www.huggingtweets.com/nicolasmaduro/1611140964301/predictions.png
+thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
 tags:
 - huggingtweets
 widget:
@@ -51,7 +51,7 @@ The model was trained on [@nicolasmaduro's tweets](https://twitter.com/nicolasma
 <tbody style='border-width:0'>
 <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
 <td style='border-width:0'>Tweets downloaded</td>
-<td style='border-width:0'>3205</td>
+<td style='border-width:0'>3207</td>
 </tr>
 <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
 <td style='border-width:0'>Retweets</td>
@@ -63,20 +63,20 @@ The model was trained on [@nicolasmaduro's tweets](https://twitter.com/nicolasma
 </tr>
 <tr style='border-width:0'>
 <td style='border-width:0'>Tweets kept</td>
-<td style='border-width:0'>1079</td>
+<td style='border-width:0'>1081</td>
 </tr>
 </tbody>
 </table>
 
-[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1vqvh040/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
+[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ne8h0ti/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
 
 ## Training procedure
 
 The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nicolasmaduro's tweets.
 
-Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/gplp5oxb) for full transparency and reproducibility.
+Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3ufcisio) for full transparency and reproducibility.
 
-At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/gplp5oxb/artifacts) is logged and versioned.
+At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3ufcisio/artifacts) is logged and versioned.
 
 ## Intended uses & limitations
 
```
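For context, a minimal usage sketch (not part of this commit) of how a huggingtweets model like this one is typically loaded, assuming the Hub id `huggingtweets/nicolasmaduro`:

```python
from transformers import pipeline

# Assumed Hub id for this repo; adjust if the model lives elsewhere.
generator = pipeline("text-generation", model="huggingtweets/nicolasmaduro")

# The base checkpoint is a Spanish GPT-2, so a Spanish prompt fits best.
print(generator("Mi mensaje al pueblo es", num_return_sequences=1)[0]["generated_text"])
```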
config.json CHANGED

```diff
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "gpt2",
+  "_name_or_path": "datificate/gpt2-small-spanish",
   "activation_function": "gelu_new",
   "architectures": [
     "GPT2LMHeadModel"
```
merges.txt CHANGED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d9c6c75244e859a9d77f60bdf304ee6b1712b84b484d3338a585427ddbb6cf86
-size 510406559
+oid sha256:992a627f91600f207692cc503e4dd4655ec4dfae72ad7fd25aacd464d20ee859
+size 510406770
```
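This is a Git LFS pointer file: only the `oid` (a SHA-256 of the binary blob) and `size` change when the weights are replaced. A small sketch for verifying a downloaded checkpoint against the pointer's oid, assuming a local path `pytorch_model.bin`:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large checkpoints don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

# Should match the new pointer's oid:
# 992a627f91600f207692cc503e4dd4655ec4dfae72ad7fd25aacd464d20ee859
print(sha256_of("pytorch_model.bin"))
```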
special_tokens_map.json CHANGED

```diff
@@ -1 +1 @@
-{"bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "unk_token": "<|endoftext|>"}
+{"bos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "eos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "unk_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "pad_token": "<|endoftext|>"}
```
tokenizer_config.json CHANGED

```diff
@@ -1 +1 @@
-{"unk_token": "<|endoftext|>", "bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "add_prefix_space": false, "model_max_length": 1024, "name_or_path": "gpt2"}
+{"unk_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "bos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "eos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "add_prefix_space": false, "pad_token": "<|endoftext|>", "special_tokens_map_file": null, "name_or_path": "datificate/gpt2-small-spanish", "errors": "replace"}
```
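Both tokenizer files move from bare strings to the serialized `AddedToken` form and gain a `pad_token`; functionally, every special token still maps to `<|endoftext|>`. A quick sketch to confirm that after loading (Hub id assumed as above):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("huggingtweets/nicolasmaduro")

# bos, eos, unk, and the newly added pad token all resolve to the same string.
assert tok.bos_token == tok.eos_token == tok.unk_token == tok.pad_token == "<|endoftext|>"
```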
training_args.bin CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0c4abd714304a59bd6684fb5a69ec81a485232fed1ef21fdf32e8b3e4ff5157b
+oid sha256:3b4a6cf53563b996d48c8f9879e21d29a644c7e28653cbf690edb01377ce49cb
 size 2031
```
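`training_args.bin` is the pickled `TrainingArguments` object saved by the `transformers` `Trainer`; only its hash changes here (the size stays 2031 bytes). A sketch for inspecting it, assuming you trust the source (on older PyTorch versions, drop the `weights_only` argument):

```python
import torch

# Pickled TrainingArguments from the Trainer; only load files you trust.
training_args = torch.load("training_args.bin", weights_only=False)
print(training_args)
```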
vocab.json CHANGED
The diff for this file is too large to render. See raw diff