Update README.md
README.md CHANGED
@@ -25,22 +25,19 @@ parameters:
 
 # distilgpt2-magicprompt-SD
 
+Generate/augment your prompt, stable diffusion style.
+
+
 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the Gustavosta/Stable-Diffusion-Prompts dataset.
 It achieves the following results on the evaluation set:
 - Loss: 1.3089
 - eval_steps_per_second = 17.201
 - perplexity = 3.7022
-## Model description
-
-More information needed
-
-## Intended uses & limitations
 
-More information needed
 
 ## Training and evaluation data
 
-
+refer to the `Gustavosta/Stable-Diffusion-Prompts` dataset.
 
 ## Training procedure
 
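As an aside, the perplexity figure in the card is consistent with the reported loss: for a causal language model, perplexity is the exponential of the evaluation cross-entropy loss. A minimal check:

```python
import math

# Evaluation loss reported on the model card
eval_loss = 1.3089

# Perplexity of a causal LM = exp(cross-entropy loss)
perplexity = math.exp(eval_loss)
print(round(perplexity, 4))  # ≈ 3.7022, matching the card
```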