patrickvonplaten committed
Commit dcce3d9
Parent(s): 5a80acc
upload
- README.md +157 -0
- feature_extractor/preprocessor_config.json +20 -0
- model_index.json +32 -0
- safety_checker/config.json +179 -0
- safety_checker/pytorch_model.bin +3 -0
- scheduler/scheduler_config.json +12 -0
- text_encoder/config.json +31 -0
- text_encoder/pytorch_model.bin +3 -0
- tokenizer/sentencepiece.bpe.model +3 -0
- tokenizer/special_tokens_map.json +15 -0
- tokenizer/tokenizer_config.json +22 -0
- unet/config.json +36 -0
- unet/diffusion_pytorch_model.bin +3 -0
- vae/config.json +29 -0
- vae/diffusion_pytorch_model.bin +3 -0
README.md
ADDED
@@ -0,0 +1,157 @@
---
language: zh
license: creativeml-openrail-m

tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- zh
- Chinese

inference: false
extra_gated_prompt: |-
  One more step before getting this model.
  This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
  The CreativeML OpenRAIL License specifies:

  1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
  2. IDEA-CCNL claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
  3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
  Please read the full license here: https://huggingface.co/spaces/CompVis/stable-diffusion-license

  By clicking on "Access repository" below, you accept that your *contact information* (email address and username) can be shared with the model authors as well.
extra_gated_fields:
  I have read the License and agree with its terms: checkbox
---

# AltDiffusion

| 名称 Name | 任务 Task | 语言 Language(s) | 模型 Model | Github |
|:----------:| :----: |:-------------------:| :----: |:------:|
| AltDiffusion | 多模态 Multimodal | 中英文 Chinese&English | Stable Diffusion | [FlagAI](https://github.com/FlagAI-Open/FlagAI) |


# 模型信息 Model Information

我们使用 [AltCLIP](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltCLIP/README.md) 作为text encoder,基于 [Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion) 训练了双语Diffusion模型,训练数据来自 [WuDao数据集](https://data.baai.ac.cn/details/WuDaoCorporaText) 和 [LAION](https://huggingface.co/datasets/ChristophSchuhmann/improved_aesthetics_6plus) 。

我们的版本在中英文对齐方面表现非常出色,是目前市面上开源的最强版本,保留了原版stable diffusion的大部分能力,并且在某些例子上有着比原版模型更出色的能力。

AltDiffusion 模型由名为 AltCLIP 的双语 CLIP 模型支持,该模型也可在本项目中访问。您可以阅读 [此教程](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltCLIP/README.md) 了解更多信息。

AltDiffusion模型现在支持线上演示,点击 [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/BAAI/FlagStudio) 在线试玩!

Our model performs well at aligning Chinese and English, and is the strongest open-source version on the market today, retaining most of the capabilities of the original Stable Diffusion and in some cases even exceeding the original model.

We used [AltCLIP](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltCLIP/README.md) as the text encoder, and trained a bilingual Diffusion model based on [Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion), with training data from the [WuDao dataset](https://data.baai.ac.cn/details/WuDaoCorporaText) and [LAION](https://huggingface.co/datasets/laion/laion2B-en).

The AltDiffusion model is backed by a bilingual CLIP model named AltCLIP, which is also accessible in FlagAI. You can read [this tutorial](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltCLIP/README.md) for more information.

AltDiffusion now supports an online demo; try it out by clicking [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/BAAI/FlagStudio)!

# 模型权重 Model Weights

第一次运行AltDiffusion模型时会自动从 [这里](https://model.baai.ac.cn/model-detail/100076) 下载如下权重:

The following weights are automatically downloaded from [here](https://model.baai.ac.cn/model-detail/100076) when the AltDiffusion model is run for the first time:

| 模型名称 Model name | 大小 Size | 描述 Description |
|------------------------------|---------|-------------------------------------------------------|
| StableDiffusionSafetyChecker | 1.13G | 图片的安全检查器;Safety checker for images |
| AltDiffusion | 8.0G | 我们的双语AltDiffusion模型;Our bilingual AltDiffusion model |
| AltCLIP | 3.22G | 我们的双语AltCLIP模型;Our bilingual AltCLIP model |


# 示例 Example

以下示例将为文本输入 `Anime portrait of natalie portman as an anime girl by stanley artgerm lau, wlop, rossdraws, james jean, andrei riabovitchev, marc simonetti, and sakimichan, trending on artstation` 在默认输出目录 `./AltDiffusionOutputs` 下生成图片结果。

The following example will generate image results for the text input `Anime portrait of natalie portman as an anime girl by stanley artgerm lau, wlop, rossdraws, james jean, andrei riabovitchev, marc simonetti, and sakimichan, trending on artstation` under the default output directory `./AltDiffusionOutputs`.

```python
import torch
from flagai.auto_model.auto_loader import AutoLoader
from flagai.model.predictor.predictor import Predictor

# Initialize the prompt and pick a device
prompt = "Anime portrait of natalie portman as an anime girl by stanley artgerm lau, wlop, rossdraws, james jean, andrei riabovitchev, marc simonetti, and sakimichan, trending on artstation"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the text-to-image AltDiffusion model; the weights are downloaded
# into ./checkpoints on the first run
loader = AutoLoader(task_name="text2img",
                    model_name="AltDiffusion",
                    model_dir="./checkpoints")

model = loader.get_model()
model.eval()
model.to(device)
predictor = Predictor(model)
predictor.predict_generate_images(prompt)
```

您可以在 `predict_generate_images` 函数里通过改变参数来调整设置,具体信息如下:

More parameters of `predict_generate_images` that you can adjust are listed below:

| 参数名 Parameter | 类型 Type | 描述 Description |
|--------------------------------|------------|-------------------------------------------------------|
| prompt | str | 提示文本; The prompt text |
| out_path | str | 输出路径; The output path to save images |
| n_samples | int | 输出图片数量; Number of images to be generated |
| skip_grid | bool | 如果为True,会跳过图片拼接步骤,不再将所有图片拼成一张新图; If set to true, the image gridding step will be skipped |
| ddim_step | int | DDIM采样的步数; Number of steps in the DDIM sampler |
| plms | bool | 如果为True,则会使用PLMS采样器; If set to true, the PLMS sampler will be used instead of the DDIM sampler |
| scale | float | 这个值决定了文本在多大程度上影响生成的图片,值越大影响力越强; This value determines how strongly the prompt influences the generated images |
| H | int | 图片的高度; Height of image |
| W | int | 图片的宽度; Width of image |
| C | int | 图片的channel数; Number of channels of generated images |
| seed | int | 随机种子; Random seed number |
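
As a hedged illustration, the call below adjusts several of these settings. The keyword names follow the table above; the concrete values (four samples, 50 DDIM steps, guidance scale 7.5, 512×512) are illustrative choices, not defaults taken from this repository:

```python
# Illustrative settings; parameter names come from the table above,
# and these particular values are assumptions, not repository defaults.
predictor.predict_generate_images(
    prompt,
    out_path="./AltDiffusionOutputs",  # directory where images are written
    n_samples=4,                       # generate four candidate images
    skip_grid=False,                   # also stitch the samples into one grid image
    ddim_step=50,                      # number of DDIM sampling steps
    plms=True,                         # use the PLMS sampler instead of DDIM
    scale=7.5,                         # higher values follow the prompt more closely
    H=512,
    W=512,                             # output resolution in pixels
    seed=42,                           # fix the random seed for reproducibility
)
```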

注意:模型推理需要一张显存至少10GB的GPU。

Note that model inference requires a GPU with at least 10 GB of memory.

# 更多生成结果 More Results

## 中英文对齐能力 Chinese and English alignment ability

### prompt: dark elf princess, highly detailed, d & d, fantasy, highly detailed, digital painting, trending on artstation, concept art, sharp focus, illustration, art by artgerm and greg rutkowski and fuji choko and viktoria gavrilenko and hoang lap
### 英文生成结果/Generated results from the English prompt

![image](https://github.com/FlagAI-Open/FlagAI/blob/master/examples/AltDiffusion/imgs/en_%E6%9A%97%E9%BB%91%E7%B2%BE%E7%81%B5.png)

### prompt: 黑暗精灵公主,非常详细,幻想,非常详细,数字绘画,概念艺术,敏锐的焦点,插图
### 中文生成结果/Generated results from the Chinese prompt
![image](https://github.com/FlagAI-Open/FlagAI/blob/master/examples/AltDiffusion/imgs/cn_%E6%9A%97%E9%BB%91%E7%B2%BE%E7%81%B5.png)

## 中文表现能力/The performance for Chinese prompts

### prompt: 带墨镜的男孩肖像,充满细节,8K高清
![image](https://github.com/FlagAI-Open/FlagAI/blob/master/examples/AltDiffusion/imgs/%E5%B0%8F%E7%94%B7%E5%AD%A9.png)


### prompt: 带墨镜的中国男孩肖像,充满细节,8K高清
![image](https://github.com/FlagAI-Open/FlagAI/blob/master/examples/AltDiffusion/imgs/cn_%E5%B0%8F%E7%94%B7%E5%AD%A9.png)

## 长图生成能力/The ability to generate long images

### prompt: 一只带着帽子的小狗
### 原版 stable diffusion / Original Stable Diffusion:
![image](https://github.com/FlagAI-Open/FlagAI/blob/master/examples/AltDiffusion/imgs/%E5%A4%9A%E5%B0%BA%E5%BA%A6%E7%8B%97%EF%BC%88%E4%B8%8D%E5%A5%BD%EF%BC%89.png)

### Ours:
![image](https://github.com/FlagAI-Open/FlagAI/blob/master/examples/AltDiffusion/imgs/%E5%A4%9A%E5%B0%BA%E5%BA%A6%E7%8B%97%EF%BC%88%E5%A5%BD%EF%BC%89.png)

注: 此处长图生成技术由右脑科技(RightBrain AI)提供。

Note: The long-image generation technology here is provided by RightBrain AI (右脑科技).

# 许可/License

该模型通过 [CreativeML Open RAIL-M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) 获得许可。作者对您生成的输出不主张任何权利,您可以自由使用它们,并对它们的使用负责,且使用不得违反本许可中的规定。该许可证禁止您分享任何违反法律、对他人造成伤害、传播任何可能造成伤害的个人信息、传播错误信息或针对弱势群体的内容。您可以出于商业目的修改和使用模型,但必须包含一份相同使用限制的副本。有关限制的完整列表,请[阅读许可证](https://huggingface.co/spaces/CompVis/stable-diffusion-license)。

The model is licensed with a [CreativeML Open RAIL-M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license). The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in this license. The license forbids you from sharing any content that violates any laws, produces harm to a person, disseminates any personal information that would be meant for harm, spreads misinformation, or targets vulnerable groups. You can modify and use the model for commercial purposes, but a copy of the same use restrictions must be included. For the full list of restrictions please [read the license](https://huggingface.co/spaces/CompVis/stable-diffusion-license).
feature_extractor/preprocessor_config.json
ADDED
@@ -0,0 +1,20 @@
{
  "crop_size": 224,
  "do_center_crop": true,
  "do_convert_rgb": true,
  "do_normalize": true,
  "do_resize": true,
  "feature_extractor_type": "CLIPFeatureExtractor",
  "image_mean": [
    0.48145466,
    0.4578275,
    0.40821073
  ],
  "image_std": [
    0.26862954,
    0.26130258,
    0.27577711
  ],
  "resample": 3,
  "size": 224
}
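These are the standard CLIP preprocessing statistics (224×224 crops with the usual CLIP mean/std). As a hedged sketch, the extractor can be instantiated on its own with transformers, assuming the repository has been downloaded locally to `./AltDiffusion` (a hypothetical path):

```python
# A hedged sketch; assumes this repo was downloaded to ./AltDiffusion.
from PIL import Image
from transformers import CLIPFeatureExtractor

feature_extractor = CLIPFeatureExtractor.from_pretrained("./AltDiffusion/feature_extractor")
# Any RGB image is resized and center-cropped to 224x224, then normalized.
inputs = feature_extractor(images=Image.new("RGB", (512, 512)), return_tensors="pt")
print(inputs.pixel_values.shape)  # torch.Size([1, 3, 224, 224])
```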
model_index.json
ADDED
@@ -0,0 +1,32 @@
{
  "_class_name": "StableDiffusionPipeline",
  "_diffusers_version": "0.7.2",
  "feature_extractor": [
    "transformers",
    "CLIPFeatureExtractor"
  ],
  "safety_checker": [
    "stable_diffusion",
    "StableDiffusionSafetyChecker"
  ],
  "scheduler": [
    "diffusers",
    "DDIMScheduler"
  ],
  "text_encoder": [
    "transformers",
    "CLIPTextModel"
  ],
  "tokenizer": [
    "transformers",
    "XLMRobertaTokenizer"
  ],
  "unet": [
    "diffusers",
    "UNet2DConditionModel"
  ],
  "vae": [
    "diffusers",
    "AutoencoderKL"
  ]
}
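model_index.json declares a standard diffusers pipeline layout, so in principle the whole repository can be loaded through diffusers. The sketch below is hedged: the repository id `BAAI/AltDiffusion` is an assumption (it is not stated in these files), and depending on your diffusers version the XLM-R-based text encoder may require the dedicated `AltDiffusionPipeline` class that later diffusers releases provide, rather than the `StableDiffusionPipeline` named here.

```python
# A minimal sketch, assuming the repo id BAAI/AltDiffusion and a diffusers
# version able to instantiate every component in model_index.json.
# On newer diffusers, swap in AltDiffusionPipeline for this model family.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("BAAI/AltDiffusion")
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

# The text encoder is bilingual, so Chinese prompts work as well as English.
image = pipe("一只带着帽子的小狗").images[0]
image.save("puppy_with_hat.png")
```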
safety_checker/config.json
ADDED
@@ -0,0 +1,179 @@
{
  "_commit_hash": "4bb648a606ef040e7685bde262611766a5fdd67b",
  "_name_or_path": "CompVis/stable-diffusion-safety-checker",
  "architectures": [
    "StableDiffusionSafetyChecker"
  ],
  "initializer_factor": 1.0,
  "logit_scale_init_value": 2.6592,
  "model_type": "clip",
  "projection_dim": 768,
  "text_config": {
    "_name_or_path": "",
    "add_cross_attention": false,
    "architectures": null,
    "attention_dropout": 0.0,
    "bad_words_ids": null,
    "begin_suppress_tokens": null,
    "bos_token_id": 0,
    "chunk_size_feed_forward": 0,
    "cross_attention_hidden_size": null,
    "decoder_start_token_id": null,
    "diversity_penalty": 0.0,
    "do_sample": false,
    "dropout": 0.0,
    "early_stopping": false,
    "encoder_no_repeat_ngram_size": 0,
    "eos_token_id": 2,
    "exponential_decay_length_penalty": null,
    "finetuning_task": null,
    "forced_bos_token_id": null,
    "forced_eos_token_id": null,
    "hidden_act": "quick_gelu",
    "hidden_size": 768,
    "id2label": {
      "0": "LABEL_0",
      "1": "LABEL_1"
    },
    "initializer_factor": 1.0,
    "initializer_range": 0.02,
    "intermediate_size": 3072,
    "is_decoder": false,
    "is_encoder_decoder": false,
    "label2id": {
      "LABEL_0": 0,
      "LABEL_1": 1
    },
    "layer_norm_eps": 1e-05,
    "length_penalty": 1.0,
    "max_length": 20,
    "max_position_embeddings": 77,
    "min_length": 0,
    "model_type": "clip_text_model",
    "no_repeat_ngram_size": 0,
    "num_attention_heads": 12,
    "num_beam_groups": 1,
    "num_beams": 1,
    "num_hidden_layers": 12,
    "num_return_sequences": 1,
    "output_attentions": false,
    "output_hidden_states": false,
    "output_scores": false,
    "pad_token_id": 1,
    "prefix": null,
    "problem_type": null,
    "pruned_heads": {},
    "remove_invalid_values": false,
    "repetition_penalty": 1.0,
    "return_dict": true,
    "return_dict_in_generate": false,
    "sep_token_id": null,
    "suppress_tokens": null,
    "task_specific_params": null,
    "temperature": 1.0,
    "tf_legacy_loss": false,
    "tie_encoder_decoder": false,
    "tie_word_embeddings": true,
    "tokenizer_class": null,
    "top_k": 50,
    "top_p": 1.0,
    "torch_dtype": null,
    "torchscript": false,
    "transformers_version": "4.24.0",
    "typical_p": 1.0,
    "use_bfloat16": false,
    "vocab_size": 49408
  },
  "text_config_dict": {
    "hidden_size": 768,
    "intermediate_size": 3072,
    "num_attention_heads": 12,
    "num_hidden_layers": 12
  },
  "torch_dtype": "float32",
  "transformers_version": null,
  "vision_config": {
    "_name_or_path": "",
    "add_cross_attention": false,
    "architectures": null,
    "attention_dropout": 0.0,
    "bad_words_ids": null,
    "begin_suppress_tokens": null,
    "bos_token_id": null,
    "chunk_size_feed_forward": 0,
    "cross_attention_hidden_size": null,
    "decoder_start_token_id": null,
    "diversity_penalty": 0.0,
    "do_sample": false,
    "dropout": 0.0,
    "early_stopping": false,
    "encoder_no_repeat_ngram_size": 0,
    "eos_token_id": null,
    "exponential_decay_length_penalty": null,
    "finetuning_task": null,
    "forced_bos_token_id": null,
    "forced_eos_token_id": null,
    "hidden_act": "quick_gelu",
    "hidden_size": 1024,
    "id2label": {
      "0": "LABEL_0",
      "1": "LABEL_1"
    },
    "image_size": 224,
    "initializer_factor": 1.0,
    "initializer_range": 0.02,
    "intermediate_size": 4096,
    "is_decoder": false,
    "is_encoder_decoder": false,
    "label2id": {
      "LABEL_0": 0,
      "LABEL_1": 1
    },
    "layer_norm_eps": 1e-05,
    "length_penalty": 1.0,
    "max_length": 20,
    "min_length": 0,
    "model_type": "clip_vision_model",
    "no_repeat_ngram_size": 0,
    "num_attention_heads": 16,
    "num_beam_groups": 1,
    "num_beams": 1,
    "num_channels": 3,
    "num_hidden_layers": 24,
    "num_return_sequences": 1,
    "output_attentions": false,
    "output_hidden_states": false,
    "output_scores": false,
    "pad_token_id": null,
    "patch_size": 14,
    "prefix": null,
    "problem_type": null,
    "pruned_heads": {},
    "remove_invalid_values": false,
    "repetition_penalty": 1.0,
    "return_dict": true,
    "return_dict_in_generate": false,
    "sep_token_id": null,
    "suppress_tokens": null,
    "task_specific_params": null,
    "temperature": 1.0,
    "tf_legacy_loss": false,
    "tie_encoder_decoder": false,
    "tie_word_embeddings": true,
    "tokenizer_class": null,
    "top_k": 50,
    "top_p": 1.0,
    "torch_dtype": null,
    "torchscript": false,
    "transformers_version": "4.24.0",
    "typical_p": 1.0,
    "use_bfloat16": false
  },
  "vision_config_dict": {
    "hidden_size": 1024,
    "intermediate_size": 4096,
    "num_attention_heads": 16,
    "num_hidden_layers": 24,
    "patch_size": 14
  }
}
safety_checker/pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7cebe6408f67beaf5e2272c4a90244472c7c54bba28dd27a21ec8b4b98f5bcd9
size 1216071332
scheduler/scheduler_config.json
ADDED
@@ -0,0 +1,12 @@
{
  "_class_name": "DDIMScheduler",
  "_diffusers_version": "0.7.2",
  "beta_end": 0.012,
  "beta_schedule": "scaled_linear",
  "beta_start": 0.00085,
  "clip_sample": false,
  "num_train_timesteps": 1000,
  "set_alpha_to_one": false,
  "steps_offset": 1,
  "trained_betas": null
}
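These are the standard Stable Diffusion v1 noise-schedule settings: a scaled-linear beta schedule over 1000 training timesteps. As a hedged sketch, the same scheduler can be built directly from the values above with the diffusers 0.7.x API:

```python
# Reconstructs the scheduler from the config values above.
from diffusers import DDIMScheduler

scheduler = DDIMScheduler(
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    clip_sample=False,
    num_train_timesteps=1000,
    set_alpha_to_one=False,
    steps_offset=1,
)
scheduler.set_timesteps(50)   # e.g. 50 inference steps, as in the README example
print(scheduler.timesteps[:5])
```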
text_encoder/config.json
ADDED
@@ -0,0 +1,31 @@
{
  "_name_or_path": "xlm-roberta-large",
  "architectures": [
    "RobertaSeriesModelWithTransformation"
  ],
  "attention_probs_dropout_prob": 0.0,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.0,
  "hidden_size": 1024,
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "layer_norm_eps": 1e-05,
  "learn_encoder": false,
  "max_position_embeddings": 514,
  "model_type": "xlm-roberta",
  "num_attention_heads": 16,
  "num_hidden_layers": 24,
  "output_past": true,
  "pad_token_id": 1,
  "pooler_fn": "cls",
  "position_embedding_type": "absolute",
  "project_dim": 768,
  "torch_dtype": "float32",
  "transformers_version": "4.24.0",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 250002
}
text_encoder/pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6f7e5b6bef6737f75d01705be1d59295f5c9d84c10b9715a1610160fb646245d
size 2242851593
tokenizer/sentencepiece.bpe.model
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cfc8146abe2a0488e9e2a0c56de7952f7c11ab059eca145a0a727afce0db2865
size 5069051
tokenizer/special_tokens_map.json
ADDED
@@ -0,0 +1,15 @@
{
  "bos_token": "<s>",
  "cls_token": "<s>",
  "eos_token": "</s>",
  "mask_token": {
    "content": "<mask>",
    "lstrip": true,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": "<pad>",
  "sep_token": "</s>",
  "unk_token": "<unk>"
}
tokenizer/tokenizer_config.json
ADDED
@@ -0,0 +1,22 @@
{
  "bos_token": "<s>",
  "cls_token": "<s>",
  "eos_token": "</s>",
  "mask_token": {
    "__type": "AddedToken",
    "content": "<mask>",
    "lstrip": true,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "model_max_length": 512,
  "name_or_path": "/sharefs/baai-mrnd/yzd/test/xlm-roberta-large",
  "pad_token": "<pad>",
  "processor_class": "CHCLIPProcess",
  "sep_token": "</s>",
  "sp_model_kwargs": {},
  "special_tokens_map_file": null,
  "tokenizer_class": "XLMRobertaTokenizer",
  "unk_token": "<unk>"
}
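The tokenizer is the stock sentencepiece-based `XLMRobertaTokenizer`, which is what makes bilingual prompting possible. A hedged sketch of using it standalone follows; the local path `./AltDiffusion` is an assumption, and the 77-token padding length mirrors CLIP-style text encoders rather than anything recorded in this config:

```python
# A hedged sketch; the local path and the 77-token length are assumptions.
from transformers import XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained("./AltDiffusion/tokenizer")
batch = tokenizer(
    "带墨镜的男孩肖像,充满细节,8K高清",  # a Chinese prompt from the README above
    padding="max_length",
    max_length=77,
    truncation=True,
    return_tensors="pt",
)
print(batch.input_ids.shape)  # torch.Size([1, 77])
```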
unet/config.json
ADDED
@@ -0,0 +1,36 @@
{
  "_class_name": "UNet2DConditionModel",
  "_diffusers_version": "0.7.2",
  "act_fn": "silu",
  "attention_head_dim": 8,
  "block_out_channels": [
    320,
    640,
    1280,
    1280
  ],
  "center_input_sample": false,
  "cross_attention_dim": 768,
  "down_block_types": [
    "CrossAttnDownBlock2D",
    "CrossAttnDownBlock2D",
    "CrossAttnDownBlock2D",
    "DownBlock2D"
  ],
  "downsample_padding": 1,
  "flip_sin_to_cos": true,
  "freq_shift": 0,
  "in_channels": 4,
  "layers_per_block": 2,
  "mid_block_scale_factor": 1,
  "norm_eps": 1e-05,
  "norm_num_groups": 32,
  "out_channels": 4,
  "sample_size": 32,
  "up_block_types": [
    "UpBlock2D",
    "CrossAttnUpBlock2D",
    "CrossAttnUpBlock2D",
    "CrossAttnUpBlock2D"
  ]
}
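The U-Net follows the Stable Diffusion v1 layout; the detail that matters for the bilingual setup is `cross_attention_dim: 768`, which matches the `project_dim` of the XLM-R text encoder above. A hedged sketch of loading it standalone, again assuming a local download at `./AltDiffusion`:

```python
# A hedged sketch; assumes ./AltDiffusion contains this repository.
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("./AltDiffusion/unet")
# Cross-attention expects 768-dim text embeddings, matching the text
# encoder's project_dim in text_encoder/config.json.
print(unet.config.cross_attention_dim)  # 768
```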
unet/diffusion_pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3ff603f7757f3efde7e92bac8d667e40c4c742634bee84bf906f3a2ff2a99b46
size 3438370637
vae/config.json
ADDED
@@ -0,0 +1,29 @@
{
  "_class_name": "AutoencoderKL",
  "_diffusers_version": "0.7.2",
  "act_fn": "silu",
  "block_out_channels": [
    128,
    256,
    512,
    512
  ],
  "down_block_types": [
    "DownEncoderBlock2D",
    "DownEncoderBlock2D",
    "DownEncoderBlock2D",
    "DownEncoderBlock2D"
  ],
  "in_channels": 3,
  "latent_channels": 4,
  "layers_per_block": 2,
  "norm_num_groups": 32,
  "out_channels": 3,
  "sample_size": 512,
  "up_block_types": [
    "UpDecoderBlock2D",
    "UpDecoderBlock2D",
    "UpDecoderBlock2D",
    "UpDecoderBlock2D"
  ]
}
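This autoencoder config implies the usual 8x spatial compression: four encoder blocks with three downsampling stages map a 512×512 RGB image to a 4-channel 64×64 latent, which is why the U-Net above has `in_channels: 4`. A small illustrative check:

```python
# Illustrative arithmetic only: 3 downsampling stages (between 4 blocks)
# give a 2**3 = 8x spatial reduction into latent_channels = 4 channels.
H, W = 512, 512
downsample_factor = 2 ** (4 - 1)          # 4 blocks -> 3 downsamples
latent_shape = (4, H // downsample_factor, W // downsample_factor)
print(latent_shape)                       # (4, 64, 64)
```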
vae/diffusion_pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c8b4d738e8f8ecefacb7ace7e5bc25a4a7fec79606197183892e46cbb304a227
size 334713511