New model from https://wandb.ai/wandb/huggingtweets/runs/3ufcisio
Browse files
- README.md +6 -6
- config.json +1 -1
- merges.txt +0 -0
- pytorch_model.bin +2 -2
- special_tokens_map.json +1 -1
- tokenizer_config.json +1 -1
- training_args.bin +1 -1
- vocab.json +0 -0
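The files above make up the full model repository for this commit. A minimal sketch of pulling them all down with `huggingface_hub`; the repo id `huggingtweets/nicolasmaduro` is an assumption inferred from the README below, not stated on this page:

```python
# Sketch only: fetch every file in this commit's repository.
# "huggingtweets/nicolasmaduro" is an assumed repo id, inferred from the README.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="huggingtweets/nicolasmaduro")
print(local_dir)  # cache folder with config.json, pytorch_model.bin, tokenizer files, ...
```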
README.md
CHANGED
@@ -1,6 +1,6 @@
 ---
 language: en
-thumbnail: https://
+thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
 tags:
 - huggingtweets
 widget:
@@ -51,7 +51,7 @@ The model was trained on [@nicolasmaduro's tweets](https://twitter.com/nicolasma
 <tbody style='border-width:0'>
 <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
 <td style='border-width:0'>Tweets downloaded</td>
-<td style='border-width:0'>
+<td style='border-width:0'>3207</td>
 </tr>
 <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
 <td style='border-width:0'>Retweets</td>
@@ -63,20 +63,20 @@ The model was trained on [@nicolasmaduro's tweets](https://twitter.com/nicolasma
 </tr>
 <tr style='border-width:0'>
 <td style='border-width:0'>Tweets kept</td>
-<td style='border-width:0'>
+<td style='border-width:0'>1081</td>
 </tr>
 </tbody>
 </table>
 
-[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/
+[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ne8h0ti/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
 
 ## Training procedure
 
 The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nicolasmaduro's tweets.
 
-Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/
+Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3ufcisio) for full transparency and reproducibility.
 
-At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/
+At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3ufcisio/artifacts) is logged and versioned.
 
 ## Intended uses & limitations
 
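The updated README describes a GPT-2 model fine-tuned on @nicolasmaduro's tweets. A minimal usage sketch with the `transformers` text-generation pipeline, assuming the checkpoint is published as `huggingtweets/nicolasmaduro` (the repo id is not shown in this diff):

```python
# Minimal sketch, not the model card's official usage snippet.
# "huggingtweets/nicolasmaduro" is an assumed repo id.
from transformers import pipeline

generator = pipeline("text-generation", model="huggingtweets/nicolasmaduro")
samples = generator("Mi sueño es", max_length=50, do_sample=True, num_return_sequences=3)
for s in samples:
    print(s["generated_text"])
```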
config.json
CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "gpt2",
+  "_name_or_path": "datificate/gpt2-small-spanish",
   "activation_function": "gelu_new",
   "architectures": [
     "GPT2LMHeadModel"
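The only change in config.json is `_name_or_path`, which now records that fine-tuning started from the Spanish checkpoint `datificate/gpt2-small-spanish` rather than the English `gpt2`. A sketch of loading that base checkpoint for comparison, using standard `transformers` calls (nothing specific to this repo):

```python
# Sketch: load the base checkpoint recorded in "_name_or_path".
from transformers import AutoModelForCausalLM, AutoTokenizer

base_tok = AutoTokenizer.from_pretrained("datificate/gpt2-small-spanish")
base_model = AutoModelForCausalLM.from_pretrained("datificate/gpt2-small-spanish")
print(base_model.config.architectures)  # ["GPT2LMHeadModel"], as in the config above
```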
merges.txt
CHANGED
The diff for this file is too large to render.
See raw diff
pytorch_model.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:992a627f91600f207692cc503e4dd4655ec4dfae72ad7fd25aacd464d20ee859
+size 510406770
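The weights are stored through Git LFS, so the diff shows only the pointer file: the new blob's SHA-256 and its size (about 510 MB). A sketch of verifying a downloaded copy against that pointer; the local file path is an assumption:

```python
# Sketch: check a local pytorch_model.bin against the LFS pointer above.
import hashlib
import os

path = "pytorch_model.bin"  # assumed local path to the downloaded weights
h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)

assert h.hexdigest() == "992a627f91600f207692cc503e4dd4655ec4dfae72ad7fd25aacd464d20ee859"
assert os.path.getsize(path) == 510406770
```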
special_tokens_map.json
CHANGED
@@ -1 +1 @@
-{"bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "unk_token": "<|endoftext|>"}
+{"bos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "eos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "unk_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "pad_token": "<|endoftext|>"}
tokenizer_config.json
CHANGED
@@ -1 +1 @@
-{"unk_token": "<|endoftext|>", "bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "add_prefix_space": false, "
+{"unk_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "bos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "eos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "add_prefix_space": false, "pad_token": "<|endoftext|>", "special_tokens_map_file": null, "name_or_path": "datificate/gpt2-small-spanish", "errors": "replace"}
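Both tokenizer files now spell out the special tokens as full `AddedToken` entries, add a `pad_token`, and point `name_or_path` at the Spanish base checkpoint. A quick sketch of inspecting the result, again assuming the repo id `huggingtweets/nicolasmaduro`:

```python
# Sketch: confirm the special tokens declared in the updated tokenizer files.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("huggingtweets/nicolasmaduro")  # assumed repo id
print(tok.bos_token, tok.eos_token, tok.unk_token, tok.pad_token)   # all "<|endoftext|>"
```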
training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:3b4a6cf53563b996d48c8f9879e21d29a644c7e28653cbf690edb01377ce49cb
 size 2031
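training_args.bin is the small (about 2 KB) file the `transformers` Trainer writes alongside the weights; it holds the pickled `TrainingArguments`, so it can be loaded for inspection. A sketch, assuming a local copy of the file; recent PyTorch versions may require `weights_only=False` since this is not a plain tensor file:

```python
# Sketch: inspect the hyperparameters stored in training_args.bin.
import torch

args = torch.load("training_args.bin")  # assumed local path; newer torch may need weights_only=False
print(args)  # TrainingArguments(learning_rate=..., num_train_epochs=..., ...)
```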
vocab.json
CHANGED
The diff for this file is too large to render.
See raw diff