---
license: cc-by-nc-4.0
datasets:
- AdamCodd/Civitai-8m-prompts
metrics:
- rouge
base_model: t5-small
model-index:
- name: t5-small-negative-prompt-generator
  results:
  - task:
      type: text-generation
      name: Text Generation
    metrics:
    - type: loss
      value: 0.173
    - type: rouge-1
      value: 63.86
      name: Validation ROUGE-1
    - type: rouge-2
      value: 47.5195
      name: Validation ROUGE-2
    - type: rouge-l
      value: 62.0977
      name: Validation ROUGE-L
widget:
- text: masterpiece, 1girl, looking at viewer, sitting, tea, table, garden
  example_title: Prompt
pipeline_tag: text2text-generation
inference: false
---
## t5-small-negative-prompt-generator
This model is [t5-small](https://huggingface.co/google-t5/t5-small) fine-tuned on a subset of the [AdamCodd/Civitai-8m-prompts](https://huggingface.co/datasets/AdamCodd/Civitai-8m-prompts) dataset (~800K prompts), restricted to the top 10% of prompts by Civitai's positive engagement (the "stats" field in the dataset). The training data includes negative embedding names (e.g. easynegative, badhandv4), so the model will output them as well.

It achieves the following results on the evaluation set (a sketch of the metric computation follows the list):
* Loss: 0.1730
* ROUGE-1: 63.8600
* ROUGE-2: 47.5195
* ROUGE-L: 62.0977
* ROUGE-Lsum: 62.1006
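
The ROUGE figures above can be reproduced with the `evaluate` library listed under framework versions. A minimal sketch (the prediction/reference strings are placeholders, and `evaluate` returns scores in the 0-1 range, so they are scaled by 100 here):

```python
import evaluate

# Load the ROUGE metric (Evaluate 0.4.1, per the framework versions below)
rouge = evaluate.load("rouge")

# Placeholder generated/reference negative prompts
predictions = ["easynegative, badhandv4, (worst quality:2), blurry"]
references = ["easynegative, badhandv4, (worst quality:2), lowres, blurry"]

scores = rouge.compute(predictions=predictions, references=references)
print({k: round(v * 100, 4) for k, v in scores.items()})
# {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
```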

The idea behind this model is to automatically generate negative prompts that improve the end result, conditioned on the positive prompt given as input. I believe it could be useful for suggesting negative prompts to new users of Stable Diffusion and similar models.

The license is **cc-by-nc-4.0**. For commercial use rights, please [contact me](https://discord.com/users/859202914400075798).

## Usage

The length of the negative prompt can be adjusted with the `max_new_tokens` parameter. Keep in mind that you'll need to tune the sampling parameters slightly to avoid repetition and to improve the quality of the output.

```python
from transformers import pipeline

text2text_generator = pipeline("text2text-generation", model="AdamCodd/t5-small-negative-prompt-generator")

generated_text = text2text_generator(
    "masterpiece, 1girl, looking at viewer, sitting, tea, table, garden",
    do_sample=True,
    max_new_tokens=50,
    repetition_penalty=1.2,
    no_repeat_ngram_size=2,
    temperature=0.9,
    top_p=0.92
)

print(generated_text)
# [{'generated_text': 'easynegative, badhandv4, (worst quality:2), (low quality lowres:1), blurry, text'}]
```
This model has been trained exclusively on Stable Diffusion prompts (SD1.4, SD1.5, SD2.1, SDXL, ...), so it might not work as well with other text-to-image models.
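
Since the model targets Stable Diffusion prompts, the generated negative prompt can be fed straight into an image pipeline. Below is a minimal sketch using `diffusers`; the `runwayml/stable-diffusion-v1-5` checkpoint is only an example choice:

```python
import torch
from diffusers import StableDiffusionPipeline
from transformers import pipeline

prompt = "masterpiece, 1girl, looking at viewer, sitting, tea, table, garden"

# Generate a matching negative prompt from the positive prompt
negative_generator = pipeline(
    "text2text-generation",
    model="AdamCodd/t5-small-negative-prompt-generator"
)
negative_prompt = negative_generator(
    prompt,
    do_sample=True,
    max_new_tokens=50,
    repetition_penalty=1.2,
    no_repeat_ngram_size=2,
    temperature=0.9,
    top_p=0.92
)[0]["generated_text"]

# Pass both prompts to a Stable Diffusion pipeline
# (runwayml/stable-diffusion-v1-5 is an example SD1.5 checkpoint)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = pipe(prompt, negative_prompt=negative_prompt).images[0]
image.save("output.png")
```

Note that names such as `easynegative` or `badhandv4` refer to textual-inversion embeddings; unless they are loaded into the pipeline (e.g. with `pipe.load_textual_inversion(...)`), they are treated as plain tokens.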

## Training and evaluation data

The model was fine-tuned on a ~800K-prompt subset of [AdamCodd/Civitai-8m-prompts](https://huggingface.co/datasets/AdamCodd/Civitai-8m-prompts), filtered to the top 10% of prompts by positive engagement, as described above.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list for a possible mapping onto the Trainer API):
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
- Mixed precision
- num_epochs: 1
- weight_decay: 0.01
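
The card does not specify the training loop. As a rough illustration only, here is how these values might map onto the standard `Seq2SeqTrainingArguments`; the `output_dir` is hypothetical and `fp16=True` assumes the mixed precision was float16:

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical mapping of the hyperparameters above onto the Trainer API
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-negative-prompt-generator",  # hypothetical
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=1,
    weight_decay=0.01,
    fp16=True,                    # "Mixed precision" (float16 assumed)
    predict_with_generate=True,   # needed for ROUGE evaluation
)
# AdamW with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer
```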

### Framework versions

- Transformers 4.36.2
- Datasets 2.16.1
- Tokenizers 0.15.0
- Evaluate 0.4.1

If you want to support me, you can do so [here](https://ko-fi.com/adamcodd).