# SPIGA: Shape Preserving Facial Landmarks with Graph Attention Networks.

[![Project Page](https://badgen.net/badge/color/Project%20Page/purple?icon=atom&label)](https://bmvc2022.mpi-inf.mpg.de/155/)
[![arXiv](https://img.shields.io/badge/arXiv-2210.07233-b31b1b.svg)](https://arxiv.org/abs/2210.07233)
[![PyPI version](https://badge.fury.io/py/spiga.svg)](https://badge.fury.io/py/spiga)
[![License](https://img.shields.io/badge/License-BSD%203--Clause-blue.svg)](LICENSE)
[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/andresprados/SPIGA/blob/main/colab_tutorials/video_demo.ipynb)

This repository contains the model weights of **SPIGA, a face alignment and headpose estimator** that takes advantage of the complementary benefits of CNN and GNN architectures, producing plausible face shapes in the presence of strong appearance changes.

<p align="center">
  <img src="https://raw.githubusercontent.com/andresprados/SPIGA/main/assets/spiga_scheme.png" width="80%">
</p>

**It achieves top-performing results in:**

[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/shape-preserving-facial-landmarks-with-graph/pose-estimation-on-300w-full)](https://paperswithcode.com/sota/pose-estimation-on-300w-full?p=shape-preserving-facial-landmarks-with-graph)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/shape-preserving-facial-landmarks-with-graph/head-pose-estimation-on-wflw)](https://paperswithcode.com/sota/head-pose-estimation-on-wflw?p=shape-preserving-facial-landmarks-with-graph)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/shape-preserving-facial-landmarks-with-graph/pose-estimation-on-merl-rav)](https://paperswithcode.com/sota/pose-estimation-on-merl-rav?p=shape-preserving-facial-landmarks-with-graph)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/shape-preserving-facial-landmarks-with-graph/face-alignment-on-merl-rav)](https://paperswithcode.com/sota/face-alignment-on-merl-rav?p=shape-preserving-facial-landmarks-with-graph)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/shape-preserving-facial-landmarks-with-graph/face-alignment-on-wflw)](https://paperswithcode.com/sota/face-alignment-on-wflw?p=shape-preserving-facial-landmarks-with-graph)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/shape-preserving-facial-landmarks-with-graph/face-alignment-on-300w-split-2)](https://paperswithcode.com/sota/face-alignment-on-300w-split-2?p=shape-preserving-facial-landmarks-with-graph)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/shape-preserving-facial-landmarks-with-graph/face-alignment-on-cofw-68)](https://paperswithcode.com/sota/face-alignment-on-cofw-68?p=shape-preserving-facial-landmarks-with-graph)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/shape-preserving-facial-landmarks-with-graph/face-alignment-on-300w)](https://paperswithcode.com/sota/face-alignment-on-300w?p=shape-preserving-facial-landmarks-with-graph)

## Setup

The repository is available on [github](https://github.com/andresprados/SPIGA.git).
To run the video analyzer demo or evaluate the algorithm, install the repository from the source code:

```
# Best practices:
# 1. Create a virtual environment.
# 2. Install PyTorch according to your CUDA version.
# 3. Install SPIGA from source:

git clone https://github.com/andresprados/SPIGA.git
cd SPIGA
pip install -e .

# To run the video analyzer demo, install the extra requirements:
pip install -e .[demo]
```

**Models:** By default, model weights are automatically downloaded on demand and stored at ```./spiga/models/weights/```.
You can also download them from [Google Drive](https://drive.google.com/drive/folders/1olrkoiDNK_NUCscaG9BbO3qsussbDi7I?usp=sharing).

***Note:*** All callable files provide a detailed parser that describes the behaviour of the program and its inputs. Check the available operational modes with the ```--help``` flag.

## Inference and Demo
We provide an inference framework for SPIGA, available at ```./spiga/inference```. The models can be easily deployed in third-party projects by adding a few lines of code. Check out our inference and application tutorials for more information:

<div align="center">

Tutorials | Notebook |
:---: | :---: |
Image Inference Example | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/andresprados/SPIGA/blob/main/colab_tutorials/image_demo.ipynb) |
Face Video Analyzer Demo | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/andresprados/SPIGA/blob/main/colab_tutorials/video_demo.ipynb) |

</div>
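
The tutorials above boil down to a few lines of code. As a minimal sketch, assuming the ```ModelConfig```/```SPIGAFramework``` inference API shown in the tutorials and a ```[x, y, w, h]``` bounding-box format (verify both against the notebooks; the helper names below are hypothetical):

```python
def to_xywh(x0, y0, x1, y1):
    """Convert a corner-format bounding box to the [x, y, w, h]
    format assumed here for SPIGA (verify against the tutorials)."""
    return [x0, y0, x1 - x0, y1 - y0]


def analyze_face(image_path, bbox_xywh, dataset="wflw"):
    """Run SPIGA landmark and headpose inference on one face.

    Imports are kept local so this module stays importable
    without SPIGA and OpenCV installed.
    """
    import cv2
    from spiga.inference.config import ModelConfig
    from spiga.inference.framework import SPIGAFramework

    processor = SPIGAFramework(ModelConfig(dataset))
    image = cv2.imread(image_path)
    features = processor.inference(image, [bbox_xywh])
    return features["landmarks"][0], features["headpose"][0]
```

The ```features``` dictionary keys are assumptions taken from the inference tutorials; check them there before relying on this sketch.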

### Face Video Analyzer Demo
The demo application provides a general framework for tracking, detecting, and extracting features of human faces in images or videos. You can use the following commands to run the demo:

```
python ./spiga/demo/app.py \
    [--input] \     # Webcam ID or video path. Default: webcam '0'.
    [--dataset] \   # SPIGA pretrained weights per dataset. Default: 'wflw'.
    [--tracker] \   # Tracker name. Default: 'RetinaSort'.
    [--show] \      # Face attributes to display. Default: ['fps', 'face_id', 'landmarks', 'headpose']
    [--save] \      # Save the recording.
    [--noview] \    # Do not open the visualization window.
    [--outpath] \   # Output directory for recordings. Default: './spiga/demo/outputs'
    [--fps] \       # Frames per second.
    [--shape] \     # Visualizer shape (W,H).
```

<p align="center">
  <img src="https://raw.githubusercontent.com/andresprados/SPIGA/main/assets/demo.gif" width=250px height=250px>
  <img src="https://raw.githubusercontent.com/andresprados/SPIGA/main/assets/results/carnaval.gif" width=300px height=250px>
  <img src="https://raw.githubusercontent.com/andresprados/SPIGA/main/assets/results/football.gif" width=230px height=250px>
</p>

***Note:*** For more information check the [Demo Readme](spiga/demo/readme.md) or call the app parser with ```--help```.
## Dataloaders and Benchmarks
This repository provides general-use tools for the tasks of face alignment and headpose estimation:

* **Dataloaders:** Training and inference dataloaders are available at ```./spiga/data```, including the data augmentation tools used for training SPIGA and a data visualizer to analyze the dataset images and features. For more information check the [Data Readme](spiga/data/readme.md).

* **Benchmark:** A common benchmark framework to test any algorithm on the tasks of face alignment and headpose estimation is available at ```./spiga/eval/benchmark```. For more information check the Evaluation section below and the [Benchmark Readme](spiga/eval/benchmark/readme.md).

**Datasets:** To run the data visualizers or the evaluation benchmark, please download the dataset images from the official websites ([300W](https://ibug.doc.ic.ac.uk/resources/facial-point-annotations/), [AFLW](https://www.tugraz.at/institute/icg/research/team-bischof/lrs/downloads/aflw/), [WFLW](https://wywu.github.io/projects/LAB/WFLW.html), [COFW](http://www.vision.caltech.edu/xpburgos/ICCV13/)). By default they should be saved following this folder structure:
```
./spiga/data/databases/   # Default path can be updated by modifying 'db_img_path' in ./spiga/data/loaders/dl_config.py
|
└───/300w
│    └─── /images
│          |    /private
│          |    /test
│          └    /train
|
└───/cofw
│    └─── /images
|
└───/aflw
│    └─── /data
│          └    /flickr
|
└───/wflw
     └─── /images
```
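
As a convenience, the layout above can be verified programmatically before running the visualizers or the benchmark. A minimal sketch using only the standard library (the ```missing_dirs``` helper is hypothetical; the directory list is transcribed from the tree above):

```python
from pathlib import Path

# Expected dataset image directories, relative to the databases root
# (transcribed from the folder structure above).
LAYOUT = {
    "300w": ["images/private", "images/test", "images/train"],
    "cofw": ["images"],
    "aflw": ["data/flickr"],
    "wflw": ["images"],
}


def missing_dirs(root):
    """Return the expected dataset directories that do not exist under root."""
    root = Path(root)
    return [str(root / db / sub)
            for db, subs in LAYOUT.items()
            for sub in subs
            if not (root / db / sub).is_dir()]
```

Remember that the databases root itself defaults to ```./spiga/data/databases/``` and is configured through ```db_img_path``` in ```./spiga/data/loaders/dl_config.py```.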
**Annotations:** For simplicity, the dataset annotations are stored directly in ```./spiga/data/annotations```. We strongly recommend moving them out of the repository if you plan to use it as a git directory.

**Results:** As with the annotations, the SPIGA results are stored in ```./spiga/eval/results/<dataset_name>```. Remove them if needed.
## Evaluation
The model evaluation is divided into two scripts:

**Results generation**: The script extracts the face alignments and headpose estimations from the network trained on the desired ```<dataset_name>```, generating a ```./spiga/eval/results/results_<dataset_name>_test.json``` file that follows the same data structure defined by the dataset annotations.
```
python ./spiga/eval/results_gen.py <dataset_name>
```
**Benchmark metrics**: The script generates the desired landmark or headpose estimation metrics. We have implemented a benchmark that allows you to test any model using a results file as input.
```
python ./spiga/eval/benchmark/evaluator.py /path/to/<results_file.json> --eval lnd pose -s
```
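
For reference, the landmark metric reported by face-alignment benchmarks of this kind is typically a Normalized Mean Error (NME): the mean point-to-point Euclidean distance between predicted and ground-truth landmarks, divided by a normalization factor such as the inter-ocular distance. A minimal sketch of that computation, not the benchmark's exact implementation:

```python
import math


def nme(pred, gt, norm):
    """Normalized mean error between two landmark sets.

    pred, gt: equal-length sequences of (x, y) points.
    norm: normalization factor, e.g. inter-ocular distance
          (the choice of factor is dataset-dependent).
    """
    assert len(pred) == len(gt) and norm > 0
    total = sum(math.dist(p, g) for p, g in zip(pred, gt))
    return total / (len(pred) * norm)
```

Lower is better; results are usually reported as a percentage of the normalization distance.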
### Results Sum-up
<details>
<summary> WFLW Dataset </summary>