AlekseyKorshuk committed
Commit 159d75a
Parent(s): 55cedcf

huggingartists

Browse files:
- README.md +97 -0
- config.json +41 -0
- evaluation.txt +1 -0
- flax_model.msgpack +3 -0
- merges.txt +0 -0
- optimizer.pt +3 -0
- pytorch_model.bin +3 -0
- rng_state.pth +3 -0
- scheduler.pt +3 -0
- special_tokens_map.json +1 -0
- tokenizer.json +0 -0
- tokenizer_config.json +1 -0
- trainer_state.json +278 -0
- training_args.bin +3 -0
- vocab.json +0 -0
README.md
ADDED
@@ -0,0 +1,97 @@
---
language: en
datasets:
- huggingartists/grimes
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
  <div class="flex">
    <div
      style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/8dd2a89218346f6bdb326bf84cd9eb49.1000x1000x1.png')">
    </div>
  </div>
  <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
  <div style="text-align: center; font-size: 16px; font-weight: 800">Grimes</div>
  <a href="https://genius.com/artists/grimes">
    <div style="text-align: center; font-size: 14px;">@grimes</div>
  </a>
</div>

I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).

Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!

## How does it work?

To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).

## Training data

The model was trained on lyrics from Grimes.

The dataset is available [here](https://huggingface.co/datasets/huggingartists/grimes) and can be loaded with:

```python
from datasets import load_dataset

dataset = load_dataset("huggingartists/grimes")
```

[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3796ng30/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2), fine-tuned on Grimes's lyrics.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/ourv0tjj) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/ourv0tjj/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingartists/grimes')
generator("I am", num_return_sequences=5)
```

Or with the Transformers library:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("huggingartists/grimes")

model = AutoModelWithLMHead.from_pretrained("huggingartists/grimes")
```
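
For illustration, a minimal sketch of generating directly with the tokenizer and model loaded above; the sampling values mirror the `text-generation` defaults shipped in this repo's config.json:

```python
# Sketch: direct generation with the tokenizer/model loaded above.
# Sampling values mirror config.json's task_specific_params.
input_ids = tokenizer("I am", return_tensors="pt").input_ids
output_ids = model.generate(
    input_ids,
    do_sample=True,
    max_length=200,
    min_length=100,
    temperature=1.0,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 defines no pad token
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```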

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the artist's lyrics further affects the text generated by the model.

## About

*Built by Aleksey Korshuk*

[![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk)

[![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk)

[![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
config.json
ADDED
@@ -0,0 +1,41 @@
{
  "_name_or_path": "gpt2",
  "activation_function": "gelu_new",
  "architectures": [
    "GPT2LMHeadModel"
  ],
  "attn_pdrop": 0.1,
  "bos_token_id": 50256,
  "embd_pdrop": 0.1,
  "eos_token_id": 50256,
  "gradient_checkpointing": false,
  "initializer_range": 0.02,
  "layer_norm_epsilon": 1e-05,
  "model_type": "gpt2",
  "n_ctx": 1024,
  "n_embd": 768,
  "n_head": 12,
  "n_inner": null,
  "n_layer": 12,
  "n_positions": 1024,
  "resid_pdrop": 0.1,
  "scale_attn_weights": true,
  "summary_activation": null,
  "summary_first_dropout": 0.1,
  "summary_proj_to_labels": true,
  "summary_type": "cls_index",
  "summary_use_proj": true,
  "task_specific_params": {
    "text-generation": {
      "do_sample": true,
      "max_length": 200,
      "min_length": 100,
      "temperature": 1.0,
      "top_p": 0.95
    }
  },
  "torch_dtype": "float32",
  "transformers_version": "4.10.2",
  "use_cache": true,
  "vocab_size": 50257
}
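
The `task_specific_params` block above supplies the sampling defaults that the `text-generation` pipeline picks up automatically; a minimal sketch to inspect them:

```python
from transformers import AutoConfig

# Fetch this repo's config and read the pipeline's sampling defaults.
config = AutoConfig.from_pretrained("huggingartists/grimes")
print(config.task_specific_params["text-generation"])
# {'do_sample': True, 'max_length': 200, 'min_length': 100, 'temperature': 1.0, 'top_p': 0.95}
```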
evaluation.txt
ADDED
@@ -0,0 +1 @@
{"eval_loss": 2.3570337295532227, "eval_runtime": 1.5602, "eval_samples_per_second": 20.511, "eval_steps_per_second": 2.564, "epoch": 10.0}
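
`eval_loss` here is the mean cross-entropy in nats, so evaluation perplexity follows by exponentiation; a minimal sketch, assuming a local clone of the repo:

```python
import json
import math

# Load the final evaluation metrics and convert loss to perplexity.
with open("evaluation.txt") as f:
    metrics = json.load(f)

print(math.exp(metrics["eval_loss"]))  # exp(2.357...) ≈ 10.56
```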
flax_model.msgpack
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2aa7a10b4616c6bb25095f5a6e6df472447309246c8137f5ba1d49f576f04c7b
size 497764120
merges.txt
ADDED
The diff for this file is too large to render.
optimizer.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:42e2b33b0f4171ffd2deb0bc2a7a40fae42a0d7cba94c5ddf1fe05c722e79cdd
size 995603825
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e604579401dd96dece08c5a340e43a66ab68025cccbf2f52a44c445fb451aad1
size 510403817
rng_state.pth
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:26ac62fc33ec98b1c486420a3b8565fbb4779f350d60d80fcc0b1802972c47f9
size 14567
scheduler.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:744fcc656fe1f449c2ea79ccccba224c3e3c1abb31179e9ce51f592461ac2c71
size 623
special_tokens_map.json
ADDED
@@ -0,0 +1 @@
{"bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "unk_token": "<|endoftext|>"}
tokenizer.json
ADDED
The diff for this file is too large to render.
tokenizer_config.json
ADDED
@@ -0,0 +1 @@
{"unk_token": "<|endoftext|>", "bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "add_prefix_space": false, "model_max_length": 1024, "special_tokens_map_file": null, "name_or_path": "gpt2", "tokenizer_class": "GPT2Tokenizer"}
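
As the two tokenizer files above show, GPT-2 reuses `<|endoftext|>` as its BOS, EOS, and unknown token; a minimal sketch to confirm:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggingartists/grimes")
# All three special tokens resolve to <|endoftext|> (id 50256).
print(tokenizer.bos_token, tokenizer.eos_token, tokenizer.unk_token)
print(tokenizer.eos_token_id)  # 50256
```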
trainer_state.json
ADDED
@@ -0,0 +1,278 @@
{
  "best_metric": 2.3570337295532227,
  "best_model_checkpoint": "output/grimes/checkpoint-168",
  "epoch": 8.0,
  "global_step": 168,
  "is_hyper_param_search": false,
  "is_local_process_zero": true,
  "is_world_process_zero": true,
  "log_history": [
    {
      "epoch": 0.24,
      "learning_rate": 0.00011888735840752609,
      "loss": 2.8697,
      "step": 5
    },
    {
      "epoch": 0.48,
      "learning_rate": 7.372648442002871e-05,
      "loss": 2.7244,
      "step": 10
    },
    {
      "epoch": 0.71,
      "learning_rate": 2.5828599592490882e-05,
      "loss": 2.5533,
      "step": 15
    },
    {
      "epoch": 0.95,
      "learning_rate": 7.662053209561833e-07,
      "loss": 2.4558,
      "step": 20
    },
    {
      "epoch": 1.0,
      "eval_loss": 2.552718162536621,
      "eval_runtime": 1.4059,
      "eval_samples_per_second": 22.762,
      "eval_steps_per_second": 2.845,
      "step": 21
    },
    {
      "epoch": 1.19,
      "learning_rate": 1.1920020081922749e-05,
      "loss": 2.436,
      "step": 25
    },
    {
      "epoch": 1.43,
      "learning_rate": 5.333506393059682e-05,
      "loss": 2.4972,
      "step": 30
    },
    {
      "epoch": 1.67,
      "learning_rate": 0.00010290000000000001,
      "loss": 2.1316,
      "step": 35
    },
    {
      "epoch": 1.9,
      "learning_rate": 0.00013415229447692924,
      "loss": 2.3666,
      "step": 40
    },
    {
      "epoch": 2.0,
      "eval_loss": 2.45241641998291,
      "eval_runtime": 1.4175,
      "eval_samples_per_second": 22.575,
      "eval_steps_per_second": 2.822,
      "step": 42
    },
    {
      "epoch": 2.14,
      "learning_rate": 0.00013040646433810595,
      "loss": 2.2004,
      "step": 45
    },
    {
      "epoch": 2.38,
      "learning_rate": 9.36623942715347e-05,
      "loss": 2.2352,
      "step": 50
    },
    {
      "epoch": 2.62,
      "learning_rate": 4.3537605728465284e-05,
      "loss": 1.9823,
      "step": 55
    },
    {
      "epoch": 2.86,
      "learning_rate": 6.793535661894062e-06,
      "loss": 2.0814,
      "step": 60
    },
    {
      "epoch": 3.0,
      "eval_loss": 2.412614583969116,
      "eval_runtime": 1.4325,
      "eval_samples_per_second": 22.338,
      "eval_steps_per_second": 2.792,
      "step": 63
    },
    {
      "epoch": 3.1,
      "learning_rate": 3.047705523070765e-06,
      "loss": 1.9043,
      "step": 65
    },
    {
      "epoch": 3.33,
      "learning_rate": 3.4300000000000014e-05,
      "loss": 2.0794,
      "step": 70
    },
    {
      "epoch": 3.57,
      "learning_rate": 8.386493606940314e-05,
      "loss": 1.767,
      "step": 75
    },
    {
      "epoch": 3.81,
      "learning_rate": 0.00012527997991807721,
      "loss": 2.0631,
      "step": 80
    },
    {
      "epoch": 4.0,
      "eval_loss": 2.377373456954956,
      "eval_runtime": 1.4484,
      "eval_samples_per_second": 22.093,
      "eval_steps_per_second": 2.762,
      "step": 84
    },
    {
      "epoch": 4.05,
      "learning_rate": 0.00013643379467904383,
      "loss": 2.0061,
      "step": 85
    },
    {
      "epoch": 4.29,
      "learning_rate": 0.00011137140040750914,
      "loss": 1.6506,
      "step": 90
    },
    {
      "epoch": 4.52,
      "learning_rate": 6.347351557997137e-05,
      "loss": 1.9202,
      "step": 95
    },
    {
      "epoch": 4.76,
      "learning_rate": 1.8312641592473912e-05,
      "loss": 1.9009,
      "step": 100
    },
    {
      "epoch": 5.0,
      "learning_rate": 0.0,
      "loss": 1.7547,
      "step": 105
    },
    {
      "epoch": 5.0,
      "eval_loss": 2.376068592071533,
      "eval_runtime": 1.4234,
      "eval_samples_per_second": 22.481,
      "eval_steps_per_second": 2.81,
      "step": 105
    },
    {
      "epoch": 5.24,
      "learning_rate": 1.8312641592473936e-05,
      "loss": 1.6331,
      "step": 110
    },
    {
      "epoch": 5.48,
      "learning_rate": 6.347351557997117e-05,
      "loss": 1.7732,
      "step": 115
    },
    {
      "epoch": 5.71,
      "learning_rate": 0.00011137140040750908,
      "loss": 1.7347,
      "step": 120
    },
    {
      "epoch": 5.95,
      "learning_rate": 0.00013643379467904383,
      "loss": 1.5963,
      "step": 125
    },
    {
      "epoch": 6.0,
      "eval_loss": 2.3798959255218506,
      "eval_runtime": 1.4389,
      "eval_samples_per_second": 22.239,
      "eval_steps_per_second": 2.78,
      "step": 126
    },
    {
      "epoch": 6.19,
      "learning_rate": 0.0001252799799180772,
      "loss": 1.5787,
      "step": 130
    },
    {
      "epoch": 6.43,
      "learning_rate": 8.386493606940322e-05,
      "loss": 1.3787,
      "step": 135
    },
    {
      "epoch": 6.67,
      "learning_rate": 3.429999999999998e-05,
      "loss": 1.4709,
      "step": 140
    },
    {
      "epoch": 6.9,
      "learning_rate": 3.0477055230707115e-06,
      "loss": 1.7318,
      "step": 145
    },
    {
      "epoch": 7.0,
      "eval_loss": 2.3730082511901855,
      "eval_runtime": 1.4477,
      "eval_samples_per_second": 22.104,
      "eval_steps_per_second": 2.763,
      "step": 147
    },
    {
      "epoch": 7.14,
      "learning_rate": 6.793535661894024e-06,
      "loss": 1.4261,
      "step": 150
    },
    {
      "epoch": 7.38,
      "learning_rate": 4.353760572846532e-05,
      "loss": 1.2714,
      "step": 155
    },
    {
      "epoch": 7.62,
      "learning_rate": 9.366239427153457e-05,
      "loss": 1.3748,
      "step": 160
    },
    {
      "epoch": 7.86,
      "learning_rate": 0.00013040646433810593,
      "loss": 1.5499,
      "step": 165
    },
    {
      "epoch": 8.0,
      "eval_loss": 2.3570337295532227,
      "eval_runtime": 1.4478,
      "eval_samples_per_second": 22.102,
      "eval_steps_per_second": 2.763,
      "step": 168
    }
  ],
  "max_steps": 210,
  "num_train_epochs": 10,
  "total_flos": 175588245504000.0,
  "trial_name": null,
  "trial_params": null
}
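
The `log_history` above interleaves training-loss entries with one `eval_loss` entry per epoch; a minimal sketch, assuming a local clone, to extract the evaluation curve and the best checkpoint:

```python
import json

with open("trainer_state.json") as f:
    state = json.load(f)

# One eval_loss entry is logged at the end of each epoch.
eval_curve = [(e["epoch"], e["eval_loss"])
              for e in state["log_history"] if "eval_loss" in e]
print(eval_curve)  # [(1.0, 2.5527...), ..., (8.0, 2.3570...)]
print(state["best_metric"], state["best_model_checkpoint"])
```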
training_args.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aa109cb8608765a0746aef97b10ee403703c8243a3711c142b84ef899083f2cc
size 2671
vocab.json
ADDED
The diff for this file is too large to render.