---
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- DDPO
inference: true
---

# Aligned Diffusion Model via DDPO

This diffusion model was aligned using the Denoising Diffusion Policy Optimization (DDPO) algorithm with the following reward models:
```
closed-source VLMs: claude3-opus, gpt-4o, gpt-4v
```
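
As a rough illustration of how DDPO fine-tuning against a VLM judge can be set up, the sketch below uses the `trl` library's DDPO utilities. The prompt, the `score_with_vlm` helper, and all hyperparameter values are assumptions for illustration only, not the exact configuration used to train this checkpoint.
```python
# Illustrative DDPO fine-tuning sketch with the `trl` library.
# Reward helper and hyperparameters are assumptions, not this model's exact recipe.
import torch
from trl import DDPOConfig, DDPOTrainer, DefaultDDPOStableDiffusionPipeline

def score_with_vlm(image, prompt):
    # Hypothetical placeholder for a VLM judge call (e.g. gpt-4o or claude3-opus);
    # replace with a real reward-model query that returns a scalar score.
    return 0.0

def prompt_fn():
    # Return a prompt and an (empty) metadata dict for each sample.
    return "a pink flower", {}

def reward_fn(images, prompts, metadata):
    # Score each generated image against its prompt with the VLM judge.
    rewards = torch.tensor([score_with_vlm(img, p) for img, p in zip(images, prompts)])
    return rewards, {}

# Wrap the base model in TRL's DDPO pipeline with LoRA enabled.
pipeline = DefaultDDPOStableDiffusionPipeline(
    "runwayml/stable-diffusion-v1-5", use_lora=True
)
config = DDPOConfig(
    num_epochs=100,
    sample_num_steps=50,
    sample_batch_size=4,
    train_batch_size=2,
)
trainer = DDPOTrainer(config, reward_fn, prompt_fn, pipeline)
trainer.train()
```
In practice, the reward function would query one of the VLM judges listed above and return one scalar reward per image.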

## How to Use

You can load the model and perform inference as follows:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base Stable Diffusion pipeline.
pretrained_model_name = "runwayml/stable-diffusion-v1-5"
pipeline = StableDiffusionPipeline.from_pretrained(pretrained_model_name, torch_dtype=torch.float16)

# Load the DDPO-aligned LoRA weights from this repository's checkpoint.
lora_path = "path/to/checkpoint"
pipeline.load_lora_weights(lora_path)
pipeline.to("cuda")

# Fix the random seed for reproducible sampling.
generator = torch.Generator(device="cuda")
generator = generator.manual_seed(1)

prompt = "a pink flower"

image = pipeline(prompt=prompt, generator=generator, guidance_scale=5).images[0]
```

## Citation
```
@misc{mjbench2024mjbench,
  title={MJ-BENCH: Is Your Multimodal Reward Model Really a Good Judge?},
  author={Chen*, Zhaorun and Du*, Yichao and Wen*, Zichen and Zhou*, Yiyang and Cui, Chenhang and Weng, Zhenzhen and Tu, Haoqin and Wang, Chaoqi and Tong, Zhengwei and HUANG, Leria and Chen, Canyu and Ye, Qinghao and Zhu, Zhihong and Zhang, Yuqing and Zhou, Jiawei and Zhao, Zhuokai and Rafailov, Rafael and Finn, Chelsea and Yao, Huaxiu},
  year={2024}
}
```