mwmathis committed on
Commit
673140e
1 Parent(s): 5346d30

Update README.md

Files changed (1)
  1. README.md +76 -4
README.md CHANGED
@@ -8,11 +8,83 @@ tags:
 Copyright 2021-2023 by Mackenzie Mathis, Alexander Mathis, Shaokai Ye and contributors. All rights reserved.


- - Please cite Ye et al if you use this model in your work https://arxiv.org/abs/2203.07436v1
 - If this license is not suitable for your business or project
 please contact EPFL-TTO (https://tto.epfl.ch/) for a full commercial license.
- - This software may not be used to harm any animal deliberately.

- Model description:

- The model is described in Ye et al. 2023.
Copyright 2021-2023 by Mackenzie Mathis, Alexander Mathis, Shaokai Ye and contributors. All rights reserved.


- Please cite **Ye et al 2023** if you use this model in your work: https://arxiv.org/abs/2203.07436v1
- If this license is not suitable for your business or project,
please contact EPFL-TTO (https://tto.epfl.ch/) for a full commercial license.

This software may not be used to harm any animal deliberately!


**MODEL CARD:**

This model was trained on a dataset called "Quadruped-40K." It was trained in TensorFlow 2 within the [DeepLabCut framework](www.deeplabcut.org).
Full training details can be found in Ye et al. 2023; in brief, the model was trained with **DLCRNet**, as introduced in [Lauer et al 2022 Nature Methods](https://www.nature.com/articles/s41592-022-01443-0).
You can use this model simply with our lightweight loading package, [DLCLibrary](https://github.com/DeepLabCut/DLClibrary). Here is an example usage:

```python
from pathlib import Path

from dlclibrary import download_huggingface_model

# Create a folder and download the model into it
model_dir = Path("./superanimal_quadruped_model")
model_dir.mkdir(exist_ok=True)
download_huggingface_model("superanimal_quadruped", model_dir)
```

**Training Data:**

The model was trained jointly on the following datasets:

- **AwA-Pose** Quadruped dataset; see full details at (9).
- **AnimalPose** See full details at (10).
- **AcinoSet** See full details at (11).
- **Horse-30** Horse-30 dataset; the benchmark task is called Horse-10. See full details at (12).
- **StanfordDogs** See full details at (13, 14).
- **AP-10K** See full details at (15).
- **iRodent** We utilized the iNaturalist API functions for scraping observations
with the taxon ID of Suborder Myomorpha (16). The functions allowed us to filter the large number of observations down to
those with photos under the CC BY-NC Creative Commons license. The most common types of rodents among the collected observations are
Muskrat (Ondatra zibethicus), Brown Rat (Rattus norvegicus), House Mouse (Mus musculus), Black Rat (Rattus rattus), Hispid
Cotton Rat (Sigmodon hispidus), Meadow Vole (Microtus pennsylvanicus), Bank Vole (Clethrionomys glareolus), Deer Mouse
(Peromyscus maniculatus), White-footed Mouse (Peromyscus leucopus), and Striped Field Mouse (Apodemus agrarius). We then
generated segmentation masks over target animals in the data by processing the media through an algorithm we designed that
uses a Mask Region-Based Convolutional Neural Network (Mask R-CNN) (17) model with a ResNet-50-FPN backbone (18),
pretrained on the COCO dataset (19). The 443 processed images were then manually labeled with both pose annotations and
segmentation masks.

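The iRodent scraping step above can be sketched as a query against iNaturalist's public v1 observations API. The endpoint and parameter names below follow that API, but the exact filters used for iRodent (and the Myomorpha taxon ID) are not given here, so treat this as an illustrative sketch, not the actual scraping code:

```python
from urllib.parse import urlencode

# Public iNaturalist observations endpoint (v1 API).
INAT_OBSERVATIONS = "https://api.inaturalist.org/v1/observations"

def observations_url(taxon_id: int, page: int = 1, per_page: int = 200) -> str:
    """Build a query URL for observations of one taxon whose photos are
    CC BY-NC licensed, mirroring the filtering described above.
    The taxon_id (e.g. for Suborder Myomorpha) must be supplied by the caller."""
    params = {
        "taxon_id": taxon_id,
        "photo_license": "cc-by-nc",  # keep only CC BY-NC photos
        "photos": "true",             # only observations that have photos
        "page": page,
        "per_page": per_page,
    }
    return f"{INAT_OBSERVATIONS}?{urlencode(params)}"
```

Paging through the JSON responses of such URLs and collecting the photo records would yield the raw image pool before mask generation and manual labeling.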
Here is an image with the keypoint guide, the distribution of images per dataset, and examples from the datasets inferenced with a model trained with less data for benchmarking, as in Ye et al. 2023.
Note that the model we are releasing here has comparable or higher performance.

Please note that each dataset was labeled by separate labs and separate individuals; therefore, while we map names
to a unified pose vocabulary, there will be annotator bias in keypoint placement (see Ye et al. 2023 for our Supplementary Note on annotator bias).
You will also note the dataset is highly diverse across species, but collectively it has more representation of domesticated animals like dogs, cats, horses, and cattle.
If performance is not as good as you need it to be, we recommend first trying video adaptation (see Ye et al. 2023),
or fine-tuning these weights with your own labeling.

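The name unification mentioned above amounts to a lookup from each source dataset's keypoint names into one shared vocabulary so annotations can be pooled. The mapping below is a minimal sketch with hypothetical name pairs, not the actual vocabulary used in Ye et al. 2023:

```python
# Illustrative only: per-dataset mapping from source keypoint names to a
# unified vocabulary. These name pairs are hypothetical examples.
UNIFIED_VOCAB = {
    "animalpose": {"L_Eye": "left_eye", "R_Eye": "right_eye", "TailBase": "tail_base"},
    "ap10k": {"left_eye": "left_eye", "right_eye": "right_eye", "root_of_tail": "tail_base"},
}

def to_unified(dataset: str, keypoints: dict) -> dict:
    """Rename one image's {keypoint_name: (x, y)} annotations into the unified
    vocabulary, dropping keypoints with no unified counterpart."""
    mapping = UNIFIED_VOCAB[dataset]
    return {mapping[name]: xy for name, xy in keypoints.items() if name in mapping}
```

Unifying the names pools the annotations, but, as noted above, it cannot remove per-lab bias in where each annotator placed those keypoints.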
<p align="center">
<img src="https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/1690988780004-AG00N6OU1R21MZ0AU9RE/modelcard-SAQ.png?format=1500w" width="95%">
</p>

9. Prianka Banik, Lin Li, and Xishuang Dong. A novel dataset for keypoint detection of quadruped animals from images. ArXiv, abs/2108.13958, 2021.
10. Jinkun Cao, Hongyang Tang, Haoshu Fang, Xiaoyong Shen, Cewu Lu, and Yu-Wing Tai. Cross-domain adaptation for animal pose estimation. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 9497–9506, 2019.
11. Daniel Joska, Liam Clark, Naoya Muramatsu, Ricardo Jericevich, Fred Nicolls, Alexander Mathis, Mackenzie W. Mathis, and Amir Patel. AcinoSet: A 3D pose estimation dataset and baseline models for cheetahs in the wild. 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 13901–13908, 2021.
12. Alexander Mathis, Thomas Biasi, Steffen Schneider, Mert Yuksekgonul, Byron Rogers, Matthias Bethge, and Mackenzie W. Mathis. Pretraining boosts out-of-domain robustness for pose estimation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1859–1868, 2021.
13. Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao, and Li Fei-Fei. Novel dataset for fine-grained image categorization. In First Workshop on Fine-Grained Visual Categorization, IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, June 2011.
14. Benjamin Biggs, Thomas Roddick, Andrew Fitzgibbon, and Roberto Cipolla. Creatures great and SMAL: Recovering the shape and motion of animals from video. In Asian Conference on Computer Vision, pages 3–19. Springer, 2018.
15. Hang Yu, Yufei Xu, Jing Zhang, Wei Zhao, Ziyu Guan, and Dacheng Tao. AP-10K: A benchmark for animal pose estimation in the wild. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021.
16. iNaturalist. GBIF Occurrence Download. https://doi.org/10.15468/dl.p7nbxt. iNaturalist, July 2020.
17. Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, pages 2961–2969, 2017.
18. Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection, 2016.
19. Tsung-Yi Lin, Michael Maire, Serge J. Belongie, Lubomir D. Bourdev, Ross B. Girshick, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. CoRR, abs/1405.0312, 2014.