---
license: mit
---

# User-Controllable Latent Transformer for StyleGAN Image Layout Editing

Yuki Endo: "User-Controllable Latent Transformer for StyleGAN Image Layout Editing," Computer Graphics Forum (Pacific Graphics 2022) [[Project](http://www.cgg.cs.tsukuba.ac.jp/~endo/projects/UserControllableLT)] [[PDF (preprint)](http://arxiv.org/abs/2208.12408)]

## Prerequisites
1. Python 3.8
2. PyTorch 1.9.0
3. Flask
4. Others (see env.yml; a setup sketch follows this list)

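If env.yml is a conda environment spec, as the filename suggests (an assumption; the file's contents are not shown here), the environment can be created with:
```
# Create the environment from the repository's spec (assumes a conda-format env.yml)
conda env create -f env.yml
# Activate it; substitute the name defined inside env.yml
conda activate <env-name>
```
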
## Preparation
Download and decompress [our pre-trained models](https://drive.google.com/file/d/1lBL_J-uROvqZ0BYu9gmEcMCNyaPo9cBY/view?usp=sharing).

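The archive name is not stated in this README; the sketch below assumes a zip that unpacks into pretrained_models/, since later commands reference paths such as pretrained_models/latent_transformer/cat.pt:
```
# Optional: fetch from Google Drive on the command line with gdown
gdown https://drive.google.com/uc?id=1lBL_J-uROvqZ0BYu9gmEcMCNyaPo9cBY
# Decompress (archive name assumed; adjust to the actual download)
unzip pretrained_models.zip
```
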
## Inference with our pre-trained models
We provide an interactive interface based on Flask. The interface can be launched locally with
```
python interface/flask_app.py --checkpoint_path=pretrained_models/latent_transformer/cat.pt
```
It can then be accessed at http://localhost:8000/.

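A quick way to confirm the server is up before opening a browser (not part of the original instructions; any HTTP client works):
```
# Expect an HTTP response header from the Flask app
curl -I http://localhost:8000/
```
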
## Training
The latent transformer can be trained with
```
python scripts/train.py --exp_dir=results --stylegan_weights=pretrained_models/stylegan2-cat-config-f.pt
```
To train on your own dataset, first train StyleGAN2 on it using [rosinality's code](https://github.com/rosinality/stylegan2-pytorch), and then run the above script, specifying the trained weights. A sketch of this pipeline follows.

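As a rough end-to-end sketch of the custom-dataset path (the flags follow rosinality's stylegan2-pytorch README; the LMDB path, image size, and checkpoint filename are illustrative assumptions):
```
git clone https://github.com/rosinality/stylegan2-pytorch
cd stylegan2-pytorch
# Convert an image folder into the LMDB format the trainer expects
python prepare_data.py --out data.lmdb --size 256 /path/to/images
# Train StyleGAN2; checkpoints are written under checkpoint/
python train.py --size 256 data.lmdb
cd ..
# Point this repository's training script at the resulting weights
# (090000.pt is an example; use whichever checkpoint was saved)
python scripts/train.py --exp_dir=results --stylegan_weights=stylegan2-pytorch/checkpoint/090000.pt
```
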
## Citation
Please cite our paper if you find the code useful:
```
@article{endoPG2022,
  title   = {User-Controllable Latent Transformer for StyleGAN Image Layout Editing},
  author  = {Yuki Endo},
  journal = {Computer Graphics Forum},
  volume  = {41},
  number  = {7},
  pages   = {395--406},
  doi     = {10.1111/cgf.14686},
  year    = {2022}
}
```

## Acknowledgements
This code borrows heavily from the [pixel2style2pixel](https://github.com/eladrich/pixel2style2pixel) and [expansion](https://github.com/gengshan-y/expansion) repositories.