---
license: mit
---
# Intelligent Grimm - Open-ended Visual Storytelling via Latent Diffusion Models (CVPR 2024)

This repository contains the official checkpoints of StoryGen: https://arxiv.org/abs/2306.00973/

For code details, please refer to our GitHub repository: [StoryGen Code](https://github.com/haoningwu3639/StoryGen)

## Some Information
[Project Page](https://haoningwu3639.github.io/StoryGen_Webpage/) $\cdot$ [Paper](https://arxiv.org/abs/2306.00973/) $\cdot$ [Dataset](https://huggingface.co/datasets/haoningwu/StorySalon) $\cdot$ [Checkpoint](https://huggingface.co/haoningwu/StoryGen)

## Requirements
- Python >= 3.8 (we recommend [Anaconda](https://www.anaconda.com/download/#linux) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html))
- [PyTorch >= 1.12](https://pytorch.org/)
- xformers == 0.0.13
- diffusers == 0.13.1
- accelerate == 0.17.1
- transformers == 4.27.4

A suitable [conda](https://conda.io/) environment named `storygen` can be created and activated with:

```
conda env create -f environment.yaml
conda activate storygen
```

## Meta Data Preparation
#### Data from YouTube
We provide the metadata of our StorySalon dataset in `./data/metadata.json`. It includes the id, name, URL, duration, and the keyframe list of each video after filtering.
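
For a quick look at the metadata, the sketch below loads the file and iterates over its entries. The field names used here (`id`, `name`, `url`, `duration`, `keyframes`) are illustrative assumptions; please check `./data/metadata.json` for the actual schema.

```
# Minimal sketch for inspecting the metadata; field names are assumptions,
# see ./data/metadata.json for the actual schema.
import json

with open("./data/metadata.json", "r", encoding="utf-8") as f:
    metadata = json.load(f)

# The top-level object may be a list of entries or a dict keyed by video id.
entries = metadata.values() if isinstance(metadata, dict) else metadata

for entry in entries:
    print(entry.get("id"), entry.get("name"), entry.get("url"), entry.get("duration"))
    print("listed keyframes:", len(entry.get("keyframes", [])))
```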

To download these videos, we recommend using [youtube-dl](https://github.com/yt-dlp/yt-dlp) via:
```
youtube-dl --write-auto-sub -o 'file\%(title)s.%(ext)s' -f 135 [url]
```
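
If you prefer to script the download, a rough sketch is given below. It assumes the metadata entries expose a `url` field (an assumption, as above) and that `youtube-dl` (or `yt-dlp` with the same interface) is available on your PATH.

```
# Hedged sketch: batch-download all videos listed in the metadata.
import json
import subprocess

with open("./data/metadata.json", "r", encoding="utf-8") as f:
    metadata = json.load(f)
entries = metadata.values() if isinstance(metadata, dict) else metadata

for entry in entries:
    url = entry.get("url")
    if not url:
        continue
    subprocess.run(
        ["youtube-dl", "--write-auto-sub",
         "-o", r"file\%(title)s.%(ext)s",  # output template copied from the command above; adjust as needed
         "-f", "135", url],
        check=False,  # keep going if a single video fails
    )
```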

The keyframes extracted with the following data processing pipeline (step 1) can be filtered according to the keyframe list provided in the metadata to avoid manual selection.
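
A minimal sketch of such filtering is shown below. It assumes the extracted keyframes are stored as image files per video and that the metadata's keyframe list contains their filenames; both are assumptions, so adapt the paths and field names to the actual layout.

```
# Hedged sketch: keep only the keyframes listed in the metadata.
# Assumes keyframes were extracted to ./keyframes/<video_id>/ and that each
# metadata entry stores a list of keyframe filenames; adjust to the real layout.
import json
from pathlib import Path

with open("./data/metadata.json", "r", encoding="utf-8") as f:
    metadata = json.load(f)
entries = metadata.values() if isinstance(metadata, dict) else metadata

for entry in entries:
    keep = set(entry.get("keyframes", []))            # assumed field name
    frame_dir = Path("./keyframes") / str(entry.get("id", ""))
    if not frame_dir.is_dir():
        continue
    for frame in frame_dir.iterdir():
        if frame.name not in keep:
            print("would drop:", frame)               # replace with frame.unlink() to actually delete
```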

The corresponding masks, story-level descriptions and visual descriptions can be extracted with the following data processing pipeline or downloaded from [here](https://huggingface.co/datasets/haoningwu/StorySalon).

#### Data from Open-source Libraries
For the open-source PDF data, you can directly download the frames, corresponding masks, descriptions and narratives from [StorySalon](https://huggingface.co/datasets/haoningwu/StorySalon).

## Data Processing Pipeline
The data processing pipeline includes several necessary steps:
- Extract the keyframes and their corresponding subtitles;
- Detect and remove duplicate frames;
- Segment text, people, and headshots in images, and remove frames that only contain real people;
- Inpaint the text, headshots and real hands in the frames according to the segmentation mask;
- (Optional) Use a caption model combined with the subtitles to generate a description of each image.

The keyframes and their corresponding subtitles can be extracted via:
```
python ./data_process/extract.py
```

The duplicate frames can be detected and removed via:
```
CUDA_VISIBLE_DEVICES=0 python ./data_process/dup_remove.py
```
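
For intuition, one common way to detect near-duplicate keyframes is a perceptual difference hash. The sketch below (using Pillow and NumPy) is only an illustration and is not necessarily the criterion implemented in `dup_remove.py`.

```
# Illustrative near-duplicate detection via a difference hash (dHash).
# This is a generic sketch, not necessarily what ./data_process/dup_remove.py does.
import numpy as np
from PIL import Image

def dhash(path, size=8):
    # Downscale to (size+1) x size grayscale and compare neighbouring pixels.
    img = Image.open(path).convert("L").resize((size + 1, size), Image.LANCZOS)
    arr = np.asarray(img, dtype=np.int16)
    return (arr[:, 1:] > arr[:, :-1]).flatten()

def is_duplicate(path_a, path_b, max_bits=6):
    # Treat two frames as duplicates if their hashes differ in only a few bits.
    return int(np.count_nonzero(dhash(path_a) != dhash(path_b))) <= max_bits

# Example: is_duplicate("frame_001.jpg", "frame_002.jpg")
```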

The text, people and headshots can be segmented, and the frames that only contain real people are then removed via:
```
python ./data_process/yolov7/human_ocr_mask.py
```
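
Conceptually, the detected text and head/person regions are merged into a single binary mask that the inpainting step consumes. The sketch below is a generic illustration; the box format and the white-means-inpaint convention are assumptions, not the actual interface of `human_ocr_mask.py`.

```
# Illustrative sketch: merge detected boxes into one binary inpainting mask.
# Box format (x0, y0, x1, y1) and the white-means-inpaint convention are
# assumptions for illustration only.
import numpy as np
from PIL import Image

def boxes_to_mask(image_size, boxes):
    """image_size: (width, height); boxes: iterable of (x0, y0, x1, y1)."""
    w, h = image_size
    mask = np.zeros((h, w), dtype=np.uint8)
    for x0, y0, x1, y1 in boxes:
        mask[int(y0):int(y1), int(x0):int(x1)] = 255  # regions to be inpainted
    return Image.fromarray(mask, mode="L")

# Example: boxes_to_mask((640, 360), [(10, 20, 200, 60)]).save("mask.png")
```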

The text, headshots and real hands in the frames can be inpainted with [SDM-Inpainting](https://github.com/CompVis/stable-diffusion), according to the segmentation mask, via:
```
CUDA_VISIBLE_DEVICES=0 python ./data_process/SDM/inpaint.py
```

Besides, we also provide the code to obtain story-level paired image-text samples.
The subtitles can be aligned with the visual frames using the Dynamic Time Warping (DTW) algorithm via:
```
CUDA_VISIBLE_DEVICES=0 python ./data_process/align.py
```
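
For reference, DTW finds a monotonic alignment between two sequences by minimizing an accumulated pairwise cost. The sketch below aligns two lists of timestamps with an absolute-difference cost; it is a generic illustration, and `align.py` may use different features and costs.

```
# Generic DTW sketch aligning two timestamp sequences (e.g. subtitle times vs.
# keyframe times). align.py may use different features/costs; this is illustrative.
import numpy as np

def dtw_align(a, b):
    """Return a list of (i, j) index pairs aligning sequence a to sequence b."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack from (n, m) to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Example: dtw_align([0.0, 2.1, 5.3], [0.2, 2.0, 2.4, 5.0])
```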

(Optional) You can use [TextBind](https://github.com/SihengLi99/TextBind) or [MiniGPT-v2](https://github.com/Vision-CAIR/MiniGPT-4) to obtain the caption of each image via:
```
CUDA_VISIBLE_DEVICES=0 python ./data_process/TextBind/main_caption.py
CUDA_VISIBLE_DEVICES=0 python ./data_process/MiniGPT-v2/main_caption.py
```

(Deprecated) Previously, you could also use [ChatCaptioner](https://github.com/Vision-CAIR/ChatCaptioner/tree/main/ChatCaptioner) to obtain the caption of each image via:
```
CUDA_VISIBLE_DEVICES=0 python ./data_process/ChatCaptioner/main_caption.py
```

For a more detailed introduction to the data processing pipeline, please refer to `./data_process/README.md` and our paper.

## Training
Before training, please download the pre-trained Stable Diffusion 1.5 checkpoints from [SDM](https://huggingface.co/runwayml/stable-diffusion-v1-5/tree/main) (including the vae, scheduler, tokenizer and unet). All of these pre-trained checkpoints should be placed into the corresponding locations in the folder `./ckpt/stable-diffusion-v1-5/`.
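
As an optional sanity check that the checkpoints sit in the expected layout, the components can be loaded with the `subfolder` argument of `diffusers`/`transformers`. This snippet is only a verification aid under that assumption, not part of the training scripts, which may construct their own scheduler.

```
# Optional layout check (not part of the training scripts): load the SD-1.5
# components from the expected subfolders to confirm they are placed correctly.
from diffusers import AutoencoderKL, PNDMScheduler, UNet2DConditionModel
from transformers import CLIPTokenizer

root = "./ckpt/stable-diffusion-v1-5"
vae = AutoencoderKL.from_pretrained(root, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(root, subfolder="unet")
scheduler = PNDMScheduler.from_pretrained(root, subfolder="scheduler")
tokenizer = CLIPTokenizer.from_pretrained(root, subfolder="tokenizer")
print("Loaded:", type(vae).__name__, type(unet).__name__, type(scheduler).__name__)
```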

For Stage 1, pre-train the self-attention layers in SDM for style transfer via:
```
CUDA_VISIBLE_DEVICES=0 accelerate launch train_StorySalon_stage1.py
```

For Stage 2, train the Visual-Language Context Module via:
```
CUDA_VISIBLE_DEVICES=0 accelerate launch train_StorySalon_stage2.py
```

To replicate the experiments on MS-COCO, train via:
```
CUDA_VISIBLE_DEVICES=0 accelerate launch train_COCO.py
```

If you have multiple GPUs to accelerate the training process, you can use:
```
CUDA_VISIBLE_DEVICES=0,1,2,3 accelerate launch --multi_gpu train_StorySalon_stage2.py
```

## Inference
```
CUDA_VISIBLE_DEVICES=0 accelerate launch inference.py
```

## TODO
- [x] Model & Training & Inference Code
- [x] Dataset Processing Pipeline
- [x] Meta Data
- [x] Code Update
- [x] Release Checkpoints
- [x] Data Update

## License
The code and checkpoints in this repository are released under the MIT license.
The open-source books in the StorySalon dataset come from multiple online open-source libraries (please refer to the Appendix of our paper for more details), and they are all under the CC-BY 4.0 license.
Note that for the data extracted from videos, we only provide YouTube URLs and the data processing pipeline; if you wish to use this data for commercial purposes, we recommend that you comply with YouTube's relevant regulations.

## Citation
If you use this code for your research or project, please cite:

```
@inproceedings{liu2024intelligent,
  title     = {Intelligent Grimm -- Open-ended Visual Storytelling via Latent Diffusion Models},
  author    = {Liu, Chang and Wu, Haoning and Zhong, Yujie and Zhang, Xiaoyun and Wang, Yanfeng and Xie, Weidi},
  booktitle = {The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2024},
}
```

## Acknowledgements
Many thanks to the codebases of [diffusers](https://github.com/huggingface/diffusers) and [SimpleSDM](https://github.com/haoningwu3639/SimpleSDM).

## Contact
If you have any questions, please feel free to contact haoningwu3639@gmail.com or liuchang666@sjtu.edu.cn.