Antiraedus SupermanxKiaski committed on
Commit f1c211d β€’ 1 Parent(s): 653ffae

Update README.md (#1)


- Update README.md (1b4d096de39f16c44f617e297c6a35c8d75acb6a)


Co-authored-by: Ricardo Milos <SupermanxKiaski@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +86 -3
README.md CHANGED
@@ -1,3 +1,86 @@
- ---
- license: mit
- ---
+ # Text2LIVE: Text-Driven Layered Image and Video Editing (ECCV 2022 - Oral)
+ ## [<a href="https://text2live.github.io/" target="_blank">Project Page</a>]
+
+ [![arXiv](https://img.shields.io/badge/arXiv-Text2LIVE-b31b1b.svg)](https://arxiv.org/abs/2204.02491)
+ ![Pytorch](https://img.shields.io/badge/PyTorch->=1.10.0-Red?logo=pytorch)
+ [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/weizmannscience/text2live)
+
+ ![teaser](https://user-images.githubusercontent.com/22198039/179798581-ca6f6652-600a-400a-b21b-713fc5c15d56.png)
+
+ **Text2LIVE** is a method for text-driven editing of real-world images and videos, as described in <a href="https://arxiv.org/abs/2204.02491" target="_blank">the paper</a>.
+
+ [//]: # (. It can be used for localized and global edits that change the texture of existing objects or augment the scene with semi-transparent effects &#40;e.g. smoke, fire, snow&#41;.)
+
+ [//]: # (### Abstract)
+ >We present a method for zero-shot, text-driven appearance manipulation in natural images and videos. Specifically, given an input image or video and a target text prompt, our goal is to edit the appearance of existing objects (e.g., object's texture) or augment the scene with new visual effects (e.g., smoke, fire) in a semantically meaningful manner. Our framework trains a generator using an internal dataset of training examples, extracted from a single input (image or video and target text prompt), while leveraging an external pre-trained CLIP model to establish our losses. Rather than directly generating the edited output, our key idea is to generate an edit layer (color+opacity) that is composited over the original input. This allows us to constrain the generation process and maintain high fidelity to the original input via novel text-driven losses that are applied directly to the edit layer. Our method neither relies on a pre-trained generator nor requires user-provided edit masks. Thus, it can perform localized, semantic edits on high-resolution natural images and videos across a variety of objects and scenes.
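+
+ As a rough sketch of the compositing step described above (not the authors' implementation; tensor names and shapes are assumptions), the generated edit layer can be alpha-blended over the input like this:
+ ```
+ import torch
+
+ def composite_edit_layer(input_rgb: torch.Tensor, edit_rgba: torch.Tensor) -> torch.Tensor:
+     """Alpha-blend a generated edit layer (color + opacity) over the original input.
+
+     input_rgb : (B, 3, H, W) original image in [0, 1]
+     edit_rgba : (B, 4, H, W) edit layer; channels 0-2 are RGB color, channel 3 is opacity
+     """
+     color, alpha = edit_rgba[:, :3], edit_rgba[:, 3:4]
+     return alpha * color + (1.0 - alpha) * input_rgb
+ ```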
+
+
+ ## Getting Started
+ ### Installation
+
+ ```
+ git clone https://github.com/omerbt/Text2LIVE.git
+ conda create --name text2live python=3.9
+ conda activate text2live
+ pip install -r requirements.txt
+ ```
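+
+ To verify the environment was set up correctly (a minimal sanity check, not part of the repository), confirm that PyTorch imports and sees a GPU:
+ ```
+ import torch
+
+ # Should print a PyTorch version >= 1.10.0 and True on a machine with a CUDA GPU.
+ print(torch.__version__, torch.cuda.is_available())
+ ```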
+
+ ### Download sample images and videos
+ Download sample images and videos from the DAVIS dataset:
+ ```
+ cd Text2LIVE
+ gdown "https://drive.google.com/uc?id=1osN4PlPkY9uk6pFqJZo8lhJUjTIpa80J&export=download"
+ unzip data.zip
+ ```
+ This will create a folder `data`:
+ ```
+ Text2LIVE
+ β”œβ”€β”€ ...
+ β”œβ”€β”€ data
+ β”‚   β”œβ”€β”€ pretrained_nla_models   # NLA models are stored here
+ β”‚   β”œβ”€β”€ images                  # sample images
+ β”‚   └── videos                  # sample videos from the DAVIS dataset
+ β”‚       β”œβ”€β”€ car-turn            # contains video frames
+ β”‚       β”œβ”€β”€ ...
+ └── ...
+ ```
+ To enforce temporal consistency in video edits, we utilize Neural Layered Atlases (NLA). Pretrained NLA models are taken from <a href="https://layered-neural-atlases.github.io">here</a>, and are already inside the `data` folder.
+
+ ### Run examples
+ * Our method is designed to change the texture of existing objects or to augment the scene with semi-transparent effects (e.g., smoke, fire). It is not designed for adding new objects or significantly deviating from the original spatial layout.
+ * Training **Text2LIVE** multiple times with the same inputs can lead to slightly different results.
+ * CLIP sometimes exhibits a bias towards specific solutions (see Figure 9 in the paper), so slightly different text prompts may lead to different flavors of edits.
+
+
+ The required GPU memory depends on the input image/video size, but you should be good with a Tesla V100 32GB :).
+ Mixed precision currently introduces some instability into the training process, but support for it may be added later.
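+
+ To check how much GPU memory is available on your machine (a quick helper, not part of the repository):
+ ```
+ import torch
+
+ if torch.cuda.is_available():
+     props = torch.cuda.get_device_properties(0)
+     # e.g. "Tesla V100-SXM2-32GB: 31.7 GiB"
+     print(f"{props.name}: {props.total_memory / 1024**3:.1f} GiB")
+ ```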
+
+ #### Video Editing
+ Run the following command to start training:
+ ```
+ python train_video.py --example_config car-turn_winter.yaml
+ ```
+ #### Image Editing
+ Run the following command to start training:
+ ```
+ python train_image.py --example_config golden_horse.yaml
+ ```
+ Intermediate results will be saved to `results` during optimization. How often they are saved is controlled by the `log_images_freq` flag in the configuration.
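+
+ For example, the logging frequency can be inspected from an example config before training (a sketch; the config path and surrounding layout are assumptions, only `log_images_freq` is taken from the text above):
+ ```
+ import yaml
+
+ # Hypothetical path: adjust to where the example configs live in the repository.
+ with open("configs/image_example_configs/golden_horse.yaml") as f:
+     cfg = yaml.safe_load(f)
+
+ # How often (in optimization steps) intermediate results are written to `results`.
+ print(cfg.get("log_images_freq"))
+ ```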
+
+ ## Sample Results
+ https://user-images.githubusercontent.com/22198039/179797381-983e0453-2e5d-40e8-983d-578217b358e4.mov
+
+ For more results, see the [supplementary material](https://text2live.github.io/sm/index.html).
+
+
+ ## Citation
+ ```
+ @inproceedings{bar2022text2live,
+   title={Text2LIVE: Text-driven layered image and video editing},
+   author={Bar-Tal, Omer and Ofri-Amar, Dolev and Fridman, Rafail and Kasten, Yoni and Dekel, Tali},
+   booktitle={European Conference on Computer Vision},
+   pages={707--723},
+   year={2022},
+   organization={Springer}
+ }
+ ```