boris committed on
Commit 8a2d7a5
1 Parent(s): e912b0d

New model from https://wandb.ai/wandb/huggingtweets/runs/2ze5vcu7

Files changed (5)
  1. README.md +7 -7
  2. config.json +2 -1
  3. pytorch_model.bin +1 -1
  4. tokenizer_config.json +1 -1
  5. training_args.bin +2 -2
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 language: en
-thumbnail: https://www.huggingtweets.com/queenofbithynia/1624391224518/predictions.png
+thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
 tags:
 - huggingtweets
 widget:
@@ -43,19 +43,19 @@ The model was trained on tweets from the needle-felted head of joyce carol oates
 | Data | the needle-felted head of joyce carol oates |
 | --- | --- |
 | Tweets downloaded | 3250 |
-| Retweets | 8 |
-| Short tweets | 36 |
-| Tweets kept | 3206 |
+| Retweets | 4 |
+| Short tweets | 30 |
+| Tweets kept | 3216 |
 
-[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/26v6dy65/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
+[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/21v0b2yw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
 
 ## Training procedure
 
 The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @queenofbithynia's tweets.
 
-Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ejqzu4s) for full transparency and reproducibility.
+Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ze5vcu7) for full transparency and reproducibility.
 
-At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ejqzu4s/artifacts) is logged and versioned.
+At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ze5vcu7/artifacts) is logged and versioned.
 
 ## How to use
 
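The "How to use" section kept as context above normally pairs with a short generation snippet. A minimal sketch, assuming the model is published on the Hub as `huggingtweets/queenofbithynia` (the repo id is inferred from the @queenofbithynia handle in the card, not stated in this diff):

```python
from transformers import pipeline

# Assumed repo id; the diff only names the @queenofbithynia handle.
generator = pipeline("text-generation", model="huggingtweets/queenofbithynia")

# Generate a few continuations in the style of the fine-tuned tweets.
outputs = generator("My dream is", num_return_sequences=3, max_length=50)
for out in outputs:
    print(out["generated_text"])
```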
config.json CHANGED
@@ -35,7 +35,8 @@
       "top_p": 0.95
     }
   },
-  "transformers_version": "4.7.0",
+  "torch_dtype": "float32",
+  "transformers_version": "4.10.2",
   "use_cache": true,
   "vocab_size": 50257
 }
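The two fields touched here, the new "torch_dtype" entry and the bumped "transformers_version", are plain keys in the model config and can be checked after download. A minimal sketch, again assuming the `huggingtweets/queenofbithynia` repo id:

```python
from transformers import AutoConfig

# Assumed repo id, as above.
config = AutoConfig.from_pretrained("huggingtweets/queenofbithynia")

# After this commit the config records the library version and the weight dtype.
print(config.transformers_version)           # e.g. "4.10.2"
print(getattr(config, "torch_dtype", None))  # e.g. "float32"
```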
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5f5da44f0c398a1daea00840f955d7ddc83a28fba8289775a2003fed0c84f8d3
+oid sha256:3932fd528d802938c2abe76c7adf9ef112b238e32b2022b336fc705d7f760cdc
 size 510403817
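Only the Git LFS pointer changes here: same file size, new SHA-256. A small, generic sketch for checking a downloaded `pytorch_model.bin` against the digest in the new pointer (the local path is an assumption):

```python
import hashlib

# Digest taken from the new LFS pointer above.
EXPECTED_SHA256 = "3932fd528d802938c2abe76c7adf9ef112b238e32b2022b336fc705d7f760cdc"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file so a large checkpoint does not need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

assert sha256_of("pytorch_model.bin") == EXPECTED_SHA256
```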
tokenizer_config.json CHANGED
@@ -1 +1 @@
-{"unk_token": "<|endoftext|>", "bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "add_prefix_space": false, "model_max_length": 1024, "special_tokens_map_file": null, "name_or_path": "gpt2"}
+{"unk_token": "<|endoftext|>", "bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "add_prefix_space": false, "model_max_length": 1024, "special_tokens_map_file": null, "name_or_path": "gpt2", "tokenizer_class": "GPT2Tokenizer"}
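The only change is the added `"tokenizer_class": "GPT2Tokenizer"` entry, which lets `AutoTokenizer` resolve the tokenizer class straight from this file. A minimal sketch, with the same assumed repo id:

```python
from transformers import AutoTokenizer

# Assumed repo id; with tokenizer_class recorded, AutoTokenizer picks GPT2Tokenizer
# (or its fast variant) without falling back to the model config.
tokenizer = AutoTokenizer.from_pretrained("huggingtweets/queenofbithynia")
print(type(tokenizer).__name__)  # GPT2Tokenizer or GPT2TokenizerFast
```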
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:43fb2345f93c4e26010a887d9aa258b5ee554655f9eb708b70b2e44721431eee
-size 2479
+oid sha256:b1cdc73b5dd683dfccd305ec14050a65c98b105df595b35ebdc54319d4518907
+size 2671
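`training_args.bin` is the pickled `TrainingArguments` object saved by the `transformers` Trainer; the hyperparameters it holds mirror those logged in the W&B run linked above. A sketch for inspecting it locally (assumes `transformers` is installed so the object can be unpickled; the local path is an assumption):

```python
import torch

# training_args.bin is a pickled transformers.TrainingArguments object, not a
# tensor file, so recent torch versions need weights_only=False to unpickle it.
args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.num_train_epochs, args.per_device_train_batch_size)
```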