baofff committed on
Commit 9656653
1 Parent(s): ce7c554

Create README.md

Files changed (1)
  1. README.md +79 -0
README.md ADDED
---
license: mit
tags:
- text-to-image
- image-to-text
- image-captioning
- image-variation
- text-variation
- multi-modality
- generative model
---

UniDiffuser is a multi-modal diffusion model with a transformer-based backbone ([U-ViT](https://github.com/baofff/U-ViT)). By setting the proper timesteps, UniDiffuser can perform unconditional image and text generation, text-to-image generation, image-to-text generation, and joint image-text pair generation, all without additional overhead.
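
The task is selected purely by the pair of timesteps passed to the joint noise prediction network. The snippet below is only a rough, hypothetical sketch of this idea; the function, tensor shapes, and maximum timestep are placeholders, not the UniDiffuser codebase API:

```python
import torch

# Hypothetical stand-in for the joint noise prediction network (U-ViT).
# Real shapes, names, and the sampling loop live in the UniDiffuser codebase.
def joint_noise_pred(x_img, x_text, t_img, t_text):
    return torch.randn_like(x_img), torch.randn_like(x_text)

T = 1000                              # assumed maximum timestep
x_img = torch.randn(1, 4, 64, 64)     # noisy image latent (placeholder shape)
x_text = torch.randn(1, 77, 64)       # noisy text embedding (placeholder shape)
t = torch.tensor([500])
zero, tmax = torch.zeros_like(t), torch.full_like(t, T)

# Joint image-text generation: both modalities share the same noise level.
eps_img, eps_text = joint_noise_pred(x_img, x_text, t, t)

# Text-to-image: the text condition is clean, so its timestep is 0.
eps_img, _ = joint_noise_pred(x_img, x_text, t, zero)

# Image-to-text: the image condition is clean, so its timestep is 0.
_, eps_text = joint_noise_pred(x_img, x_text, zero, t)

# Unconditional image generation: the text input is pure noise (timestep T),
# so it carries no information about the image.
eps_img, _ = joint_noise_pred(x_img, x_text, t, tmax)
```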


The main component of UniDiffuser is [U-ViT](https://github.com/baofff/U-ViT), which parameterizes the joint noise prediction network. The other components serve as encoders and decoders of the different modalities, including a pretrained image autoencoder from [Stable Diffusion](https://github.com/CompVis/stable-diffusion), a pretrained [image ViT-B/32 CLIP encoder](https://github.com/openai/CLIP), a pretrained [text ViT-L CLIP encoder](https://huggingface.co/openai/clip-vit-large-patch14), and a [GPT-2](https://github.com/openai/gpt-2) text decoder that we finetuned ourselves.
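
The CLIP encoders listed above are publicly available. As a point of reference, they can be loaded with their standard libraries as sketched below; this is illustrative only and is not the loading code used by the UniDiffuser codebase:

```python
import clip                      # pip install git+https://github.com/openai/CLIP.git
import torch
from transformers import CLIPTextModel, CLIPTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

# Image CLIP encoder (ViT-B/32) from the openai/CLIP package.
clip_img_model, clip_preprocess = clip.load("ViT-B/32", device=device)

# Text CLIP encoder (ViT-L/14) from Hugging Face transformers.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14").to(device)

tokens = tokenizer(["a photo of a corgi"], padding="max_length",
                   max_length=77, return_tensors="pt").to(device)
text_features = text_encoder(**tokens).last_hidden_state  # shape (1, 77, 768)
```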


We provide two versions of UniDiffuser:
- [UniDiffuser-v0](https://huggingface.co/thu-ml/unidiffuser-v0): This version is trained on [LAION-5B](https://laion.ai/), which contains noisy web data of text-image pairs.
- [UniDiffuser-v1](https://huggingface.co/thu-ml/unidiffuser-v1): This version is resumed from UniDiffuser-v0 and further trained on a set of less noisy internal text-image pairs. It takes an additional flag as input to distinguish web data from internal data during training.


## Download
We provide the files for UniDiffuser-v0 [here](https://huggingface.co/thu-ml/unidiffuser-v0/tree/main) and the files for UniDiffuser-v1 [here](https://huggingface.co/thu-ml/unidiffuser-v1/tree/main).
These files are:
- `autoencoder_kl.pth`: the weights of the image autoencoder, converted from [Stable Diffusion](https://github.com/CompVis/stable-diffusion).
- `caption_decoder.pth`: the weights of the finetuned GPT-2 text decoder.
- `uvit_v0.pth` / `uvit_v1.pth`: the weights of U-ViT for UniDiffuser-v0 / UniDiffuser-v1, respectively.

Note that UniDiffuser-v0 and UniDiffuser-v1 share the same `autoencoder_kl.pth` and `caption_decoder.pth`, so you only need to download them once.
The other components are downloaded automatically.
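
The files can also be fetched programmatically; a minimal sketch using the `huggingface_hub` library, equivalent to downloading them manually from the links above:

```python
from huggingface_hub import hf_hub_download

# Shared components (identical for UniDiffuser-v0 and UniDiffuser-v1).
autoencoder_path = hf_hub_download("thu-ml/unidiffuser-v1", "autoencoder_kl.pth")
caption_decoder_path = hf_hub_download("thu-ml/unidiffuser-v1", "caption_decoder.pth")

# Version-specific U-ViT weights.
uvit_v1_path = hf_hub_download("thu-ml/unidiffuser-v1", "uvit_v1.pth")
# uvit_v0_path = hf_hub_download("thu-ml/unidiffuser-v0", "uvit_v0.pth")

print(autoencoder_path, caption_decoder_path, uvit_v1_path)
```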


## Usage
Use the model with the [UniDiffuser codebase](https://github.com/thu-ml/unidiffuser).
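
As a quick sanity check after downloading, and independently of the codebase's own loading utilities, the checkpoints can be opened with plain PyTorch; this sketch assumes the `.pth` files are ordinary state dicts:

```python
import torch

# Inspect a downloaded checkpoint (assumed to be a plain state dict); the
# model classes that consume these weights live in the UniDiffuser codebase.
state_dict = torch.load("uvit_v1.pth", map_location="cpu")
num_params = sum(v.numel() for v in state_dict.values())
print(f"{len(state_dict)} tensors, {num_params / 1e6:.1f}M parameters")
```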


## Model Details
- **Model type:** Diffusion-based multi-modal generation model
- **Language(s):** English
- **License:** MIT
- **Model Description:** This is a model that can perform image, text, text-to-image, image-to-text, and image-text pair generation. Its main component is a [U-ViT](https://github.com/baofff/U-ViT), which parameterizes the joint noise prediction network. The other components serve as encoders and decoders of the different modalities, including a pretrained image autoencoder from [Stable Diffusion](https://github.com/CompVis/stable-diffusion), a pretrained [image ViT-B/32 CLIP encoder](https://github.com/openai/CLIP), a pretrained [text ViT-L CLIP encoder](https://huggingface.co/openai/clip-vit-large-patch14), and a [GPT-2](https://github.com/openai/gpt-2) text decoder that we finetuned ourselves.
- **Resources for more information:** [GitHub Repository](https://github.com/thu-ml/unidiffuser), [Paper]().


## Direct Use

_Note: This section is taken from the [Stable Diffusion model card](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original), but it applies in the same way to UniDiffuser._


The model is intended for research purposes only. Possible research areas and tasks include:

- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.

Excluded uses are described below.

### Misuse, Malicious Use, and Out-of-Scope Use


The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive, or content that propagates historical or current stereotypes.

#### Out-of-Scope Use
The model was not trained to produce factual or true representations of people or events, and therefore using the model to generate such content is out of scope for the abilities of this model.

#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:

- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation.
- Representations of egregious violence and gore.
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.