|
--- |
|
license: apache-2.0 |
|
datasets: |
|
- detection-datasets/coco |
|
language: |
|
- en |
|
metrics: |
|
- accuracy |
|
tags: |
|
- RyzenAI |
|
- pose estimation |
|
--- |
|
|
|
# MoveNet |
|
|
|
MoveNet is an ultra-fast and accurate model that detects 17 keypoints of a body. It was released in [movenet.pytorch](https://github.com/fire717/movenet.pytorch/blob/master/README.md?plain=1).
|
|
|
|
|
We provide a modified version that is supported by [AMD Ryzen AI](https://ryzenai.docs.amd.com/).
|
|
|
|
|
|
|
## How to use |
|
|
|
### Installation |
|
|
|
Follow [Ryzen AI Installation](https://ryzenai.docs.amd.com/en/latest/inst.html) to prepare the environment for Ryzen AI. |
|
Then run the following command to install the prerequisites for this model.
|
```bash
pip install -r requirements.txt
```
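
As a quick sanity check (not part of the official setup steps), you can confirm that ONNX Runtime sees the Vitis AI execution provider after installation:

```python
import onnxruntime as ort

# The Ryzen AI installation registers the Vitis AI execution provider with ONNX Runtime.
print(ort.get_available_providers())
# "VitisAIExecutionProvider" should appear in the printed list.
```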
|
|
|
|
|
### Data Preparation (optional: for accuracy evaluation) |
|
|
|
1. Download the COCO 2017 dataset from https://cocodataset.org/ (you need `train2017.zip`, `val2017.zip`, and the annotations) and unzip it to `./data/` so that it looks like this:
|
|
|
```
├── data
    ├── annotations (person_keypoints_train2017.json, person_keypoints_val2017.json, ...)
    ├── train2017 (xx.jpg, xx.jpg, ...)
    └── val2017 (xx.jpg, xx.jpg, ...)
```
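
Before pre-processing, it can help to verify that the layout above is in place. This is a small convenience sketch we add here, not part of the original scripts:

```python
from pathlib import Path

data = Path("./data")
# Check that the unzipped COCO 2017 folders are where the pre-processing script expects them.
for sub in ("annotations", "train2017", "val2017"):
    assert (data / sub).is_dir(), f"missing directory: {data / sub}"
print(sorted(p.name for p in (data / "annotations").glob("person_keypoints_*2017.json")))
```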
|
|
|
|
|
2. Convert the dataset to our data format (described in the block below).

- Modify the paths in lines 282~287 of `make_coco_data_17keypoints.py` if needed
- Run the script to pre-process the dataset:
|
```bash
python make_coco_data_17keypoints.py
```
|
```
Our data format: JSON file

Keypoint order: ['nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear',
'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist',
'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle',
'right_ankle']

One item:
[{"img_name": "0.jpg",
  "keypoints": [x0,y0,z0,x1,y1,z1,...],  # z: 0 for no label, 1 for labeled but invisible, 2 for labeled and visible
  "center": [x,y],
  "bbox": [x0,y0,x1,y1],
  "other_centers": [[x0,y0],[x1,y1],...],
  "other_keypoints": [[[x0,y0],[x1,y1],...],[[x0,y0],[x1,y1],...],...],  # length = num_keypoints
 },
 ...
]
```
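
For illustration, here is a minimal sketch of reading the converted annotations and unpacking the flat keypoint triplets; the output file name `train2017.json` is an assumption, so substitute whatever `make_coco_data_17keypoints.py` actually writes:

```python
import json

# Hypothetical output name; use the file written by make_coco_data_17keypoints.py.
with open("data/train2017.json") as f:
    items = json.load(f)

item = items[0]
# "keypoints" is a flat list of (x, y, z) triplets in the order given above.
kpts = item["keypoints"]
triplets = [kpts[i:i + 3] for i in range(0, len(kpts), 3)]
for name, (x, y, z) in zip(["nose", "left_eye", "right_eye"], triplets):
    print(f"{name}: ({x}, {y}) visibility={z}")  # z: 0 no label, 1 invisible, 2 visible
```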
|
|
|
|
|
|
|
|
|
### Test & Evaluation |
|
|
|
- Modify `DATASET_PATH` in `eval_onnx.py` if needed
- Test the accuracy of the quantized model:
|
```bash
python eval_onnx.py --ipu --provider_config Path\To\vaip_config.json
```
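
Beyond the evaluation script, the quantized model can also be run directly through ONNX Runtime with the Vitis AI execution provider. A minimal sketch; the model file name `movenet_int8.onnx` is an assumption, and the all-zeros input only exercises the session, so replace both with the real exported model and a pre-processed image:

```python
import numpy as np
import onnxruntime as ort

# Hypothetical model file name; use the quantized ONNX model from this repository.
session = ort.InferenceSession(
    "movenet_int8.onnx",
    providers=["VitisAIExecutionProvider"],
    provider_options=[{"config_file": "vaip_config.json"}],  # path from your Ryzen AI installation
)

inp = session.get_inputs()[0]
print(inp.name, inp.shape)  # inspect the expected input layout

# Dummy input just to exercise the session; replace with a real image tensor.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
outputs = session.run(None, {inp.name: np.zeros(shape, dtype=np.float32)})
print([o.shape for o in outputs])
```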
|
|
|
### Performance |
|
|
|
| Metric | Accuracy on IPU |
| :----: | :----: |
| accuracy | 79.745% |
|
|
|
|
|
## Citation |
|
1. [model card](https://storage.googleapis.com/movenet/MoveNet.SinglePose%20Model%20Card.pdf)
2. [movenet.pytorch](https://github.com/fire717/movenet.pytorch/blob/master/README.md?plain=1)