AlekseyKorshuk committed
Commit 3b824c0
Parent: 752448c

huggingartists

README.md CHANGED
@@ -45,15 +45,15 @@ from datasets import load_dataset
 dataset = load_dataset("huggingartists/lumen")
 ```
 
-[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1hxpnibf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
+[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2fkqbnvl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
 
 ## Training procedure
 
 The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Lumen's lyrics.
 
-Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/11ncomfi) for full transparency and reproducibility.
+Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1vhfm4ch) for full transparency and reproducibility.
 
-At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/11ncomfi/artifacts) is logged and versioned.
+At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1vhfm4ch/artifacts) is logged and versioned.
 
 ## How to use
 
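The "How to use" section is cut off by the hunk boundary above. For context, here is a minimal sketch of querying the fine-tuned checkpoint with the `transformers` text-generation pipeline; the model id is taken from this diff, while the prompt and generation settings are illustrative and the README's own snippet may differ:

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint referenced in this commit.
generator = pipeline("text-generation", model="huggingartists/lumen")

# Illustrative prompt and settings; the README's own example may differ.
for output in generator("I am", max_length=50, num_return_sequences=3):
    print(output["generated_text"])
```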
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "gpt2",
+  "_name_or_path": "huggingartists/lumen",
   "activation_function": "gelu_new",
   "architectures": [
     "GPT2LMHeadModel"
flax_model.msgpack CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:24bce015b841fe91c6a5ec59b9623150ddb4877423bc119669ed2088f6d5e53f
+oid sha256:7e8b6a2357b8c0c16257421f2c0fba3c971b9777e7d78bdff29610d54b5b76ba
 size 497764120
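flax_model.msgpack and the remaining binary files in this commit are stored with Git LFS, so the repository only versions a three-line pointer (spec version, sha256 oid, byte size) while the payload lives in LFS storage; a new oid with an identical size means the weights changed in place. A sketch of verifying a downloaded file against its pointer, assuming `huggingface_hub` for the download:

```python
import hashlib
from huggingface_hub import hf_hub_download

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large checkpoints never sit fully in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# oid from the flax_model.msgpack pointer committed above.
expected = "7e8b6a2357b8c0c16257421f2c0fba3c971b9777e7d78bdff29610d54b5b76ba"

path = hf_hub_download("huggingartists/lumen", "flax_model.msgpack")
assert sha256_of(path) == expected, "checksum does not match the LFS pointer"
```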
optimizer.pt CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3f9541999d7ea6d5a08738b1fa289cd88bf548104dee1e2bbd3184a40167712c
-size 995603825
+oid sha256:0e9898a0a1bde275fb2882c866f4dfdf5e054bced69f47a08cfb4cfd8e74d0e9
+size 995604017
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b76c1de3094d8bdc8f7cdbd1fedc0581208e87ec6fe2a171c7d8b2c50cbcfae0
+oid sha256:63b4ebe6bcfbdbc5aa6664d0c39a795375ddb25e557f2fea13e8c2c27d82a82d
 size 510403817
rng_state.pth CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:17254bf529ba46aef4cea76fa56cf795c0c66a72f1bda6e297a6d1ffdb2d40f3
+oid sha256:cb4258e1766b1398e4cece0003b67990922f1c5d69a8ccdf31042a8302fc0f05
 size 14503
scheduler.pt CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:df492cdf7e8ff140992fcd1abb42fff766903919e3a81acd0ab45f185e6a5fed
+oid sha256:db294744192683dcff6f79c2449005a1595ab3e56c5db5258f43460e1f33b045
 size 623
tokenizer_config.json CHANGED
@@ -1 +1 @@
-{"unk_token": "<|endoftext|>", "bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "add_prefix_space": false, "model_max_length": 1024, "special_tokens_map_file": null, "name_or_path": "gpt2", "tokenizer_class": "GPT2Tokenizer"}
+{"unk_token": "<|endoftext|>", "bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "add_prefix_space": false, "model_max_length": 1024, "special_tokens_map_file": null, "name_or_path": "huggingartists/lumen", "tokenizer_class": "GPT2Tokenizer"}
trainer_state.json CHANGED
The diff for this file is too large to render.
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:030f3c2d633a8c13a4b2916441912a5476b978dc7af4e439ba7bdd1d16c18147
+oid sha256:cc0d983d9fc5000796e861a3da4e2ffa047a060ea0f5c68e213ace589564873b
 size 2671