## Accountable Textual-Visual Chat Learns to Reject Human Instructions in Image Re-creation

The *official* repository for [Accountable Textual-Visual Chat Learns to Reject Human Instructions in Image Re-creation](https://arxiv.org/pdf/2303.05983.pdf).

[Project Page](https://matrix-alpha.github.io/)

![The overall framework of ATVC.](atvc.png)

### Requirements

- Python 3.8
- matplotlib == 3.1.1
- numpy == 1.19.4
- pandas == 0.25.1
- scikit_learn == 0.21.3
- torch == 1.8.0
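
If you prefer pip over the conda environment described below, the pinned versions above can be installed directly. A minimal sketch (note that the PyPI package for `scikit_learn` is named `scikit-learn`, and the torch 1.8.0 wheel may need to match your CUDA version):

```shell
# Hypothetical pip equivalent of the pinned requirements above.
pip install matplotlib==3.1.1 numpy==1.19.4 pandas==0.25.1 \
    scikit-learn==0.21.3 torch==1.8.0
```
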
### Installation

We provide an environment file, ``environment.yml``, containing the required dependencies. Clone the repo and run the following command in the root of this directory:
```
conda env create -f environment.yml
```
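
Afterwards, activate the environment before running any of the scripts below. A sketch, assuming the environment is named `atvc` (check the `name:` field in `environment.yml`):

```shell
# The environment name "atvc" is an assumption; use the name
# declared at the top of environment.yml.
conda activate atvc
```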

### Dataset

Please refer to [DOWNLOAD.md](data/DOWNLOAD.md) for dataset preparation.

### Pretrained Models

Please refer to [pretrained-models](pretrained-models/README.md) to download the released models.

### Train

#### Training commands

+ To train the first stage:
```shell
bash dist_train_vae.sh ${DATA_NAME} ${NODES} ${GPUS}
```
+ To train the second stage:
```shell
bash dist_train_atvc.sh ${VAE_PATH} ${DATA_NAME} ${NODES} ${GPUS}
```

#### Arguments

+ `${VAE_PATH}`: path of the pretrained VAE model.
+ `${DATA_NAME}`: dataset for training, e.g. `CLEVR-ATVC`, `Fruit-ATVC`.
+ `${NODES}`: number of nodes.
+ `${GPUS}`: number of GPUs per node.
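
For concreteness, a hypothetical single-node run on `CLEVR-ATVC` with 8 GPUs (the checkpoint path passed to the second stage is an illustrative placeholder, not a file this repo guarantees):

```shell
# Stage 1: train the VAE on CLEVR-ATVC with 1 node and 8 GPUs.
bash dist_train_vae.sh CLEVR-ATVC 1 8

# Stage 2: train ATVC on top of a stage-1 checkpoint.
# The path below is a placeholder; point it at your own stage-1 output.
bash dist_train_atvc.sh checkpoints/vae.pt CLEVR-ATVC 1 8
```
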
### Test

#### Testing commands

+ To test the image reconstruction ability of the first stage:
```shell
bash gen_vae.sh ${GPU} ${VAE_PATH} ${IMAGE_PATH}
```
+ To test the final ATVC model:
```shell
bash gen_atvc.sh ${GPU} ${ATVC_PATH} ${TEXT_QUERY} ${IMAGE_PATH}
```

#### Arguments

+ `${GPU}`: id of a single GPU, e.g. `0`.
+ `${VAE_PATH}`: path of the pretrained VAE model.
+ `${IMAGE_PATH}`: path of the image to reconstruct, e.g. `input.png`.
+ `${ATVC_PATH}`: path of the pretrained ATVC model.
+ `${TEXT_QUERY}`: text-based query, e.g. `"Please put the small blue cube on top of the small yellow cylinder."`.
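
Putting the arguments together, a hypothetical evaluation run on GPU `0` (both checkpoint paths are illustrative placeholders):

```shell
# Reconstruct a single image with the stage-1 VAE.
bash gen_vae.sh 0 checkpoints/vae.pt input.png

# Query the final ATVC model with a text instruction and an input image.
bash gen_atvc.sh 0 checkpoints/atvc.pt \
    "Please put the small blue cube on top of the small yellow cylinder." \
    input.png
```
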
### License

`ATVC` is released under the [Apache 2.0 license](LICENSE).

### Citation

If you find this code useful for your research, please cite our paper:
```
@article{zhang2023accountable,
  title={Accountable Textual-Visual Chat Learns to Reject Human Instructions in Image Re-creation},
  author={Zhang, Zhiwei and Liu, Yuliang},
  journal={arXiv preprint arXiv:2303.05983},
  year={2023}
}
```

## Acknowledgement

Our code builds on [DALLE-pytorch](https://github.com/lucidrains/DALLE-pytorch) and [CLIP](https://github.com/openai/CLIP). We would like to thank everyone who helped label text-image pairs and participated in the human evaluation experiments. We hope our explorations and findings offer valuable insights into the accountability of textual-visual generative models.

## Contact
This project is developed by Zhiwei Zhang ([@zzw-zwzhang](https://github.com/zzw-zwzhang)) and Yuliang Liu ([@Yuliang-Liu](https://github.com/Yuliang-Liu)).