yangkaiSIGS committed on
Commit
79293b9
1 Parent(s): ed32d7d

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -1,9 +1,9 @@
  # Datasets for the Direct Preference for Denoising Diffusion Policy Optimization (D3PO)
 
- **Description**: The dataset for the image distortion experiment of the [`anything-v5`](https://huggingface.co/stablediffusionapi/anything-v5) model in the paper [Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model](https://arxiv.org/abs/2311.13231).
- (2024.1.22 Update: Add the dataset for evaluating text and image alignment before and after fine-tuning.)
+ **Description**: This repository contains the dataset for the D3PO method in the paper [Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model](https://arxiv.org/abs/2311.13231). The *d3po_dataset* directory pertains to the image distortion experiment of the [`anything-v5`](https://huggingface.co/stablediffusionapi/anything-v5) model.
+ The *text2img_dataset* directory comprises the images generated from the pretrained, preferred-image fine-tuned, reward-weighted fine-tuned, and D3PO fine-tuned models in the prompt-image alignment experiment.
 
- **Source Code**: The code used to generate this data can be found [here](https://github.com/yk7333/D3PO/tree/main).
+ **Source Code**: The code used to generate this data can be found [here](https://github.com/yk7333/D3PO/).
 
  **Directory**
  - d3po_dataset