license: apache-2.0
tags:
- computer_vision
- pose_estimation
Copyright 2021-2023 by Mackenzie Mathis, Alexander Mathis, Shaokai Ye and contributors. All rights reserved.
- Please cite Ye et al. 2023 if you use this model in your work: https://arxiv.org/abs/2203.07436v1
- If this license is not suitable for your business or project please contact EPFL-TTO (https://tto.epfl.ch/) for a full commercial license.
This software may not be used to harm any animal deliberately!
MODEL CARD:
This model was trained on a dataset called "TopViewMouse-5K." It was trained in TensorFlow 2 within the DeepLabCut framework. Full training details can be found in Ye et al. 2023. You can use this model simply with our lightweight loading package, DLCLibrary. Here is an example usage:
```python
from pathlib import Path
from dlclibrary import download_huggingface_model

# Creates a folder and downloads the model to it
model_dir = Path("./superanimal_topviewmouse_model")
model_dir.mkdir()
download_huggingface_model("superanimal_topviewmouse", model_dir)
```
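If DeepLabCut itself is installed, the downloaded weights can also be run directly on a video through the SuperAnimal inference API. Below is a minimal sketch, assuming DeepLabCut >= 2.3; the exact arguments of `video_inference_superanimal` differ between versions (newer releases may also ask for detector/backbone names), and `my_video.mp4` is a placeholder path.

```python
import deeplabcut

# Minimal sketch: run the pretrained SuperAnimal-TopViewMouse model on a video.
# "my_video.mp4" is a placeholder; replace it with your own top-view recording.
deeplabcut.video_inference_superanimal(
    ["my_video.mp4"],
    "superanimal_topviewmouse",
    videotype=".mp4",
)
```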
Training Data:
The model was trained jointly on the following datasets:
- 3CSI, BM, EPM, LDB, OFT: see full details at (1) and in (2).
- BlackMice: see full details at (3).
- WhiteMice: courtesy of Prof. Sam Golden and Nastacia Goodwin; see details in SimBA (4).
- TriMouse: see full details at (5).
- DLC-Openfield: see full details at (6).
- Kiehn-Lab-Openfield, Swimming, and Treadmill: courtesy of Prof. Ole Kiehn, Dr. Jared Cregg, and Prof. Carmelo Bellardita; see details at (7).
- MausHaus: we collected video data from five single-housed C57BL/6J male and female mice in an extended home cage, carried out in the laboratory of Mackenzie Mathis at Harvard University and also at EPFL (housing temperature 20-25 °C, humidity 20-50%). Data were recorded at 30 Hz with 640 × 480 pixel resolution, acquired with White Matter, LLC eV cameras. Annotators localized 26 keypoints across 322 frames sampled from within DeepLabCut using the k-means clustering approach (8); a minimal sketch of this frame-extraction step follows this list. All experimental procedures for mice were in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals and approved by the Harvard Institutional Animal Care and Use Committee (IACUC) (n=1 mouse), and by the Veterinary Office of the Canton of Geneva (Switzerland; license GE01) (n=4 mice).
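For reference, k-means frame sampling is done through DeepLabCut's standard frame-extraction routine. The sketch below is a minimal example of that call, assuming an ordinary DeepLabCut project; `config_path` is a placeholder and defaults may vary slightly across versions.

```python
import deeplabcut

# Minimal sketch: sample diverse frames for labeling via k-means clustering,
# as used when annotating the MausHaus frames.
config_path = "/path/to/project/config.yaml"  # placeholder project config
deeplabcut.extract_frames(
    config_path,
    mode="automatic",    # pick frames automatically instead of manually
    algo="kmeans",       # cluster frames and sample representatives per cluster
    userfeedback=False,  # do not prompt for per-video confirmation
)
```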
Here is an image with examples from the datasets, the distribution of images per dataset, and the keypoint guide.
Please note that each dataset was labeled by a different lab and different individuals; therefore, while we map the original names onto a unified pose vocabulary, there will be annotator bias in keypoint placement (see Ye et al. 2023 for our Supplementary Note on annotator bias). Also note that the dataset primarily contains C57BL/6J mice, with only some CD1 examples. If performance is not as good as you need it to be, we recommend first trying video adaptation (see Ye et al. 2023), or fine-tuning these weights with your own labeling.
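For orientation, video adaptation can be triggered through the same SuperAnimal inference API shown above. The following is a minimal sketch, assuming DeepLabCut >= 2.3; argument names such as `video_adapt` and `pseudo_threshold` may differ between versions.

```python
import deeplabcut

# Minimal sketch of self-supervised video adaptation (Ye et al. 2023):
# the model first predicts on the video, then is briefly fine-tuned on its
# own confident predictions before producing the final labels.
deeplabcut.video_inference_superanimal(
    ["my_video.mp4"],            # placeholder path to your own video
    "superanimal_topviewmouse",
    videotype=".mp4",
    video_adapt=True,            # enable video adaptation
    pseudo_threshold=0.1,        # keep confident predictions as pseudo-labels
)
```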
1. Oliver Sturman, Lukas von Ziegler, Christa Schläppi, Furkan Akyol, Mattia Privitera, Daria Slominski, Christina Grimm, Laetitia Thieren, Valerio Zerbi, Benjamin Grewe, et al. Deep learning-based behavioral analysis reaches human accuracy and is capable of outperforming commercial solutions. Neuropsychopharmacology, 45(11):1942–1952, 2020.
2. Lukas von Ziegler, Oliver Sturman, and Johannes Bohacek. Videos for DeepLabCut, Noldus EthoVision X14 and TSE Multi Conditioning Systems comparisons. https://doi.org/10.5281/zenodo.3608658. Zenodo, January 2020.
3. Isaac Chang. Trained DeepLabCut model for tracking mouse in open field arena with topdown view. https://doi.org/10.5281/zenodo.3955216. Zenodo, July 2020.
4. Simon RO Nilsson, Nastacia L. Goodwin, Jia Jie Choong, Sophia Hwang, Hayden R. Wright, Zane C. Norville, Xiaoyu Tong, Dayu Lin, Brandon S. Bentzley, Neir Eshel, Ryan J. McLaughlin, and Sam A. Golden. Simple Behavioral Analysis (SimBA) – an open source toolkit for computer classification of complex social behaviors in experimental animals. bioRxiv, 2020.
5. Jessy Lauer, Mu Zhou, Shaokai Ye, William Menegas, Steffen Schneider, Tanmay Nath, Mohammed Mostafizur Rahman, Valentina Di Santo, Daniel Soberanes, Guoping Feng, Venkatesh N. Murthy, George Lauder, Catherine Dulac, Mackenzie W. Mathis, and Alexander Mathis. Multi-animal pose estimation, identification and tracking with DeepLabCut. Nature Methods, 19:496–504, 2022.
6. Alexander Mathis, Pranav Mamidanna, Kevin M. Cury, Taiga Abe, Venkatesh N. Murthy, Mackenzie Weygandt Mathis, and Matthias Bethge. DeepLabCut: markerless pose estimation of user-defined body parts with deep learning. Nature Neuroscience, 21:1281–1289, 2018.
7. Jared M. Cregg, Roberto Leiras, Alexia Montalant, Paulina Wanken, Ian R. Wickersham, and Ole Kiehn. Brainstem neurons that command mammalian locomotor asymmetries. Nature Neuroscience, 23:730–740, 2020.