mrneuralnet commited on
Commit
083fb4c
1 Parent(s): e875957

Modify readme

Files changed (1): README.md +8 -82
README.md CHANGED
@@ -1,82 +1,8 @@
- ## <b>Detecting Photoshopped Faces by Scripting Photoshop</b> <br>[[Project Page]](http://peterwang512.github.io/FALdetector) [[Paper]](https://arxiv.org/abs/1906.05856)
-
- [Sheng-Yu Wang<sup>1</sup>](https://peterwang512.github.io/),
- [Oliver Wang<sup>2</sup>](http://www.oliverwang.info/),
- [Andrew Owens<sup>1</sup>](http://andrewowens.com/),
- [Richard Zhang<sup>2</sup>](https://richzhang.github.io/),
- [Alexei A. Efros<sup>1</sup>](https://people.eecs.berkeley.edu/~efros/). <br>
- UC Berkeley<sup>1</sup>, Adobe Research<sup>2</sup>. <br>
- In [ICCV, 2019](https://arxiv.org/abs/1906.05856).
-
- <img src='https://peterwang512.github.io/FALdetector/images/teaser.png' align="center" width=900>
-
- <b>9/30/2019 Update</b> The code and model weights have been updated to correspond to v2 of our paper. Note that the global classifier architecture has changed from ResNet-50 to DRN-C-26.
-
- <b>1/19/2019 Update</b> The evaluation dataset is released! The link is [here](https://drive.google.com/file/d/1qCnwdbXFTf96g_LP-g_h-0BQcypNDuWB/view).
-
- ## (0) Disclaimer
- Welcome! Computer vision algorithms often work well on some images but fail on others, and ours is no exception. We believe our work is a significant step forward in detecting and undoing facial warping by image editing tools. However, there are still many hard cases, and this is by no means a solved problem.
-
- This is partly because our algorithm is trained on faces warped by the Face-Aware Liquify tool in Photoshop, so it works well on these types of images but not necessarily on others. We call this the "dataset bias" problem. Please see the paper for more details on this issue.
-
- While we trained our models with various data augmentations to be more robust to downstream operations such as resizing, JPEG compression, and saturation/brightness changes, there are many other retouches (e.g. airbrushing) that can alter the low-level statistics of an image and make detection very hard.
-
- Please enjoy our results and have fun trying out our models!
-
-
- ## (1) Setup
-
- ### Install packages
- - Install PyTorch ([pytorch.org](http://pytorch.org))
- - `pip install -r requirements.txt`
-
- ### Download model weights
- - Run `bash weights/download_weights.sh`
-
-
- ## (2) Run our models
-
- ### Global classifier
- ```
- python global_classifier.py --input_path examples/modified.jpg --model_path weights/global.pth
- ```
-
- ### Local detector
- ```
- python local_detector.py --input_path examples/modified.jpg --model_path weights/local.pth --dest_folder out/
- ```
-
- **Note:** Our models are trained on faces cropped by the dlib CNN face detector. Both scripts include a `--no_crop` option that skips face detection; use it only for images that are already cropped to a face.
-
- ## (3) Dataset
- A validation set consisting of 500 original and 500 modified images each from Flickr and OpenImage can be downloaded [here](https://drive.google.com/file/d/1qCnwdbXFTf96g_LP-g_h-0BQcypNDuWB/view). Due to licensing issues, the released validation set differs from the set evaluated in the paper, and the training set will not be released.
-
- In the zip file, original faces are in the `original` folder and modified faces are in the `modified` folder. For reference, the `reference` folder contains the same faces as the `modified` folder, but before modification (i.e., the originals).
-
- To evaluate on the dataset, run:
- ```
- # Download the dataset
- cd data
- bash download_valset.sh
- cd ..
- # Run the evaluation script. Model weights need to be downloaded first.
- python eval.py --dataroot data --global_pth weights/global.pth --local_pth weights/local.pth --gpu_id 0
- ```
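For reference, the accuracy and AP numbers reported by the evaluation can be computed from per-image scores roughly as follows. This is an illustrative pure-Python sketch, not the repository's `eval.py`; the function names and the toy labels/scores are invented for the example (1 = modified, 0 = original):

```python
def accuracy(labels, scores, threshold=0.5):
    """Fraction of images whose thresholded score matches the label."""
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def average_precision(labels, scores):
    """AP: mean of the precision values at each true positive,
    with images ranked by descending score."""
    ranked = sorted(zip(scores, labels), key=lambda t: -t[0])
    tp, ap = 0, 0.0
    for rank, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            tp += 1
            ap += tp / rank          # precision at this recall point
    return ap / max(tp, 1)

# Toy balanced set of four images.
labels = [1, 1, 0, 0]
scores = [0.9, 0.6, 0.4, 0.2]
print(accuracy(labels, scores))           # 1.0
print(average_precision(labels, scores))  # 1.0
```

With a perfect ranking both metrics are 1.0; a misranked modified image lowers AP (e.g. scores `[0.9, 0.2, 0.4, 0.6]` with the same labels give AP 0.75).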
-
- The following are the models' performances on the released set:
-
- | Accuracy | AP | PSNR Increase |
- |:--------:|:--:|:-------------:|
- | 93.9% | 98.9% | +2.66 |
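The "PSNR Increase" column measures how much closer the unwarped output is to the original than the modified input was, in dB. A toy sketch of that computation (illustrative only, not the repository's metric code; the flat four-value "images" are invented for the example):

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio between two same-sized images, in dB."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)

# Toy flat "images": the reported number is psnr(unwarped, original)
# minus psnr(modified, original).
original = [100, 120, 130, 140]
modified = [110, 115, 128, 150]   # warped version
unwarped = [102, 119, 131, 142]   # model's reversal, closer to the original
increase = psnr(unwarped, original) - psnr(modified, original)
print(round(increase, 2))
```

A positive increase means the local detector's predicted flow moved the modified image back toward the original.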
-
- ## (A) Acknowledgments
-
- This repository borrows partially from the [pytorch-CycleGAN-and-pix2pix](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix), [drn](https://github.com/fyu/drn), and PyTorch [torchvision models](https://github.com/pytorch/vision/tree/master/torchvision/models) repositories.
-
- ## (B) Citation, Contact
-
- If you find this useful for your research, please consider citing this [bibtex](https://peterwang512.github.io/FALdetector/cite.txt). Please contact Sheng-Yu Wang \<sheng-yu_wang at berkeley dot edu\> with any comments or feedback.
 
+ title: P PD
+ emoji: ⚡
+ colorFrom: yellow
+ colorTo: purple
+ sdk: streamlit
+ sdk_version: 1.25.0
+ app_file: app.py
+ pinned: false
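The added lines are Hugging Face Spaces front matter, here declaring a Streamlit app served from `app.py`. A minimal sketch of how such simple `key: value` front matter can be parsed, assuming scalar values only (a real Space would use a YAML parser):

```python
def parse_front_matter(text):
    """Parse simple `key: value` front-matter lines into a dict of strings."""
    config = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(":")
        config[key.strip()] = value.strip()
    return config

front_matter = """\
title: P PD
emoji: ⚡
colorFrom: yellow
colorTo: purple
sdk: streamlit
sdk_version: 1.25.0
app_file: app.py
pinned: false
"""
cfg = parse_front_matter(front_matter)
print(cfg["sdk"], cfg["app_file"])  # streamlit app.py
```

Note that every value comes back as a string; `pinned: false` would still need a boolean conversion, which is one reason real tooling parses this block as YAML.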