yangkaiSIGS committed
Commit 79293b9 • Parent: ed32d7d • Update README.md

README.md CHANGED
# Datasets for the Direct Preference for Denoising Diffusion Policy Optimization (D3PO)

**Description**: This repository contains the datasets for the D3PO method introduced in the paper [Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model](https://arxiv.org/abs/2311.13231). The *d3po_dataset* directory pertains to the image distortion experiment with the [`anything-v5`](https://huggingface.co/stablediffusionapi/anything-v5) model.

The *text2img_dataset* directory comprises the images generated by the pretrained, preferred-image fine-tuned, reward-weighted fine-tuned, and D3PO fine-tuned models in the prompt-image alignment experiment.

**Source Code**: The code used to generate this data can be found [here](https://github.com/yk7333/D3PO/).

**Directory**
- d3po_dataset
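For convenience, here is a minimal sketch of fetching the dataset files with `huggingface_hub`. The `repo_id` below is a placeholder (an assumption, since the exact repository id is not stated in this page); substitute the id shown on this dataset's Hugging Face page.

```python
# Minimal sketch: download all files of this dataset from the Hugging Face Hub.
# NOTE: repo_id is a placeholder (assumption); replace it with the actual
# id of this dataset repository.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="<user>/<dataset-repo>",  # placeholder, not the real id
    repo_type="dataset",              # this is a dataset repo, not a model
)
print(local_dir)  # local path containing the dataset folders, e.g. d3po_dataset/
```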