mhdang committed commit 5723975 (parent: 94bfe24)

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED

@@ -8,7 +8,7 @@ pipeline_tag: text-to-image
 ---
 # Diffusion Model Alignment Using Direct Preference Optimization
 
-![row01](01.png)
+![row01](01.gif)
 
 Direct Preference Optimization (DPO) for text-to-image diffusion models is a method to align diffusion models to text human preferences by directly optimizing on human comparison data. Please check our paper at [Diffusion Model Alignment Using Direct Preference Optimization](https://arxiv.org/abs/2311.12908).
 
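For context, the DPO objective that the linked paper adapts to diffusion models trains on pairs of samples where one was preferred by a human rater. Below is a minimal sketch of the per-pair loss, assuming per-sample log-likelihoods are available (in the diffusion setting the paper substitutes an ELBO-based surrogate for these); the function name and signature are illustrative, not from the repository.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for a single human comparison pair.

    logp_w / logp_l: log-likelihoods of the preferred ("winner") and
    rejected ("loser") samples under the model being trained.
    ref_logp_w / ref_logp_l: the same quantities under the frozen
    reference model. beta scales the implicit KL constraint.
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # negative log-sigmoid of the preference margin
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the model matches the reference exactly, the margin is zero and the loss is log 2; the loss shrinks as the model assigns relatively more likelihood to the preferred sample than the reference does.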