nicholascao committed on
Commit 39b6725
1 Parent(s): c0673f6

push model and README

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,79 @@
+ ---
+ license: apache-2.0
+ widget:
+ - text: "Converts a simple image description into a prompt. Prompts are formatted as multiple related tags separated by commas, plus you can use () to increase the weight, [] to decrease the weight, or use a number to specify the weight. You should add appropriate words to make the images described in the prompt more aesthetically pleasing, but make sure there is a correlation between the input and output.\n### Input: 1 girl\n### Output:"
+ tags:
+ - pytorch
+ - transformers
+ - text-generation
+ ---
+ # BeautifulPrompt-v2
+
+ ## 简介 Brief Introduction
+
+ 我们开源了一个自动Prompt生成模型,您可以直接输入一个极其简单的Prompt,就可以得到经过语言模型优化过的Prompt,帮助您更简单地生成高颜值图像。相比[v1](https://huggingface.co/alibaba-pai/pai-bloom-1b1-text2prompt-sd),我们提升了复杂场景下的表现以及增加了生成权重(配合sd-webui使用)的能力。
+
+ We release an automatic prompt-generation model: enter an extremely simple prompt and the model returns a prompt optimized by the language model, making it easier to produce visually appealing images. Compared with [v1](https://huggingface.co/alibaba-pai/pai-bloom-1b1-text2prompt-sd), we have improved performance in complex scenes and added the ability to generate attention weights in the prompt (for use with sd-webui).
+
+ * Github: [EasyNLP](https://github.com/alibaba/EasyNLP)
+
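+ For example, a generated tag written as `(white dress:1.2)` raises the attention weight of "white dress" to 1.2, `[blurry]` lowers the weight of "blurry", and unmarked tags keep the default weight of 1.0; sd-webui interprets this syntax when rendering the image.
+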
+ ## 使用 Usage
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained('alibaba-pai/pai-bloom-1b1-text2prompt-sd-v2')
+ model = AutoModelForCausalLM.from_pretrained('alibaba-pai/pai-bloom-1b1-text2prompt-sd-v2').eval().cuda()
+
+ raw_prompt = '1 girl'
+
+ TEMPLATE_V2 = 'Converts a simple image description into a prompt. \
+ Prompts are formatted as multiple related tags separated by commas, plus you can use () to increase the weight, [] to decrease the weight, \
+ or use a number to specify the weight. You should add appropriate words to make the images described in the prompt more aesthetically pleasing, \
+ but make sure there is a correlation between the input and output.\n\
+ ### Input: {raw_prompt}\n### Output:'
+
+ # Fill the template with the raw description and move the token ids to the GPU.
+ input = TEMPLATE_V2.format(raw_prompt=raw_prompt)
+ input_ids = tokenizer.encode(input, return_tensors='pt').cuda()
+
+ # Sample five candidate prompts for the same input.
+ outputs = model.generate(
+     input_ids,
+     max_length=384,
+     do_sample=True,
+     temperature=0.9,
+     top_k=50,
+     top_p=0.95,
+     repetition_penalty=1.1,
+     num_return_sequences=5)
+
+ # Decode only the newly generated tokens, dropping the template prefix.
+ prompts = tokenizer.batch_decode(outputs[:, input_ids.size(1):], skip_special_tokens=True)
+ prompts = [p.strip() for p in prompts]
+ print(prompts)
+ ```
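+
+ To render one of the generated prompts, a minimal sketch using [diffusers](https://github.com/huggingface/diffusers) and [sd-xl-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) (the model used for the gallery below) could look like this. Note that the ()/[]/number weighting syntax is interpreted by sd-webui, while a plain diffusers pipeline treats those characters as ordinary text, so this is only an illustrative sketch:
+
+ ```python
+ import torch
+ from diffusers import StableDiffusionXLPipeline
+
+ # Load the SDXL base pipeline in fp16 (assumes a CUDA GPU with enough memory).
+ pipe = StableDiffusionXLPipeline.from_pretrained(
+     'stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16
+ ).to('cuda')
+
+ # `prompts` is the list produced by the BeautifulPrompt-v2 snippet above;
+ # take the first candidate and generate an image from it.
+ image = pipe(prompts[0], num_inference_steps=30).images[0]
+ image.save('beautiful_prompt_sample.png')
+ ```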
+
+ ## 作品展示 Gallery
+
+ <style>
+ table th:first-of-type {
+     width: 50%;
+ }
+ table th:nth-of-type(2) {
+     width: 50%;
+ }
+ </style>
+
+ | Before | After |
+ | ---------------------------------------- | ---------------------------------- |
+ | prompt: a beautiful girl | prompt: (8k, RAW photo, best quality, masterpiece:1.2), (realistic, photo-realistic:1.37), octane render, ultra high res, photon mapping, radiosity, physically-based rendering, ue5, ((white dress)), ((long hair)), ((beautiful face)), ((light brown eyes)), ((smile))) extremely detailed CG unity 8k wallpaper, makeup, (glowing lips), (fantasy lining), (intricate details), light bokeh, (sharp focus) centered at the center of the face (wide angle:0.6), full body |
+ | ![](imgs/2023-08-29_15-22-45_2442.png) ![](imgs/2023-08-29_15-23-01_7579.png) | ![](imgs/2023-08-29_15-28-21_2272.png) ![](imgs/2023-08-29_15-28-37_2750.png) |
+
+ | Before | After |
+ | ---------------------------------------- | ---------------------------------- |
+ | prompt: Astronaut rides horse | prompt: (masterpiece), (best quality), astronaut on horseback, (rides horse), ( helmet ), (standing on horseback), panorama, looking ahead, detailed background, solo |
+ | ![](imgs/2023-08-29_15-30-52_5812.png) ![](imgs/2023-08-29_15-31-08_8054.png) | ![](imgs/2023-08-29_15-33-31_7554.png) ![](imgs/2023-08-29_15-33-49_3184.png) |
+
+ > Generated by [sd-xl-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
+
+ ## 使用须知 Notice for Use
+
+ 使用上述模型需遵守[AIGC模型开源特别条款](https://terms.alicdn.com/legal-agreement/terms/common_platform_service/20230505180457947/20230505180457947.html)。
+
+ If you want to use this model, please read this [document](https://terms.alicdn.com/legal-agreement/terms/common_platform_service/20230505180457947/20230505180457947.html) carefully and abide by the terms.
config.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "_name_or_path": "alibaba-pai/pai-bloom-1b1-text2prompt-sd-v2",
+   "apply_residual_connection_post_layernorm": false,
+   "architectures": [
+     "BloomForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "attention_softmax_in_fp32": true,
+   "bias_dropout_fusion": true,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "hidden_dropout": 0.0,
+   "hidden_size": 1536,
+   "initializer_range": 0.02,
+   "layer_norm_epsilon": 1e-05,
+   "masked_softmax_fusion": true,
+   "model_type": "bloom",
+   "n_head": 16,
+   "n_inner": null,
+   "n_layer": 24,
+   "offset_alibi": 100,
+   "pad_token_id": 3,
+   "pretraining_tp": 1,
+   "skip_bias_add": true,
+   "skip_bias_add_qkv": false,
+   "slow_but_exact": false,
+   "torch_dtype": "float16",
+   "transformers_version": "4.30.0",
+   "unk_token_id": 0,
+   "use_cache": true,
+   "vocab_size": 250880
+ }
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "pad_token_id": 3,
+   "transformers_version": "4.30.0"
+ }
imgs/2023-08-29_15-22-45_2442.png ADDED
imgs/2023-08-29_15-23-01_7579.png ADDED
imgs/2023-08-29_15-28-21_2272.png ADDED
imgs/2023-08-29_15-28-37_2750.png ADDED
imgs/2023-08-29_15-30-52_5812.png ADDED
imgs/2023-08-29_15-31-08_8054.png ADDED
imgs/2023-08-29_15-33-31_7554.png ADDED
imgs/2023-08-29_15-33-49_3184.png ADDED
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f73c5c8902bd384c6f77ef2857a124a782e2ce79ea1eda6868d50505a7136f36
+ size 2130723617
special_tokens_map.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "bos_token": "<s>",
+   "eos_token": "</s>",
+   "pad_token": "<pad>",
+   "unk_token": "<unk>"
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6c9597d88490e2d1215db6f35ebc1ed6ef1294a7ee829e36078b74c0a8ddaadf
+ size 14500723
tokenizer_config.json ADDED
@@ -0,0 +1,11 @@
+ {
+   "add_prefix_space": false,
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "</s>",
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "<pad>",
+   "padding_side": "left",
+   "tokenizer_class": "BloomTokenizer",
+   "unk_token": "<unk>"
+ }