---
title: Mot Metrics
emoji: π
colorFrom: gray
colorTo: green
tags:
- evaluate
- metric
description: "Compute multi-object tracking (MOT) metrics such as MOTA, MOTP and IDF1 from predicted and reference bounding boxes."
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
---
# How to Use
The MOT metrics module takes two numeric arrays as input, corresponding to the predicted and reference bounding boxes:
```python
>>> import evaluate
>>> module = evaluate.load("SEA-AI/mot-metrics")
>>> predicted = [[1, 1, 10, 20, 30, 40, 0.85], [2, 1, 15, 25, 35, 45, 0.78], [2, 2, 55, 65, 75, 85, 0.95]]
>>> ground_truth = [[1, 1, 10, 20, 30, 40], [2, 1, 15, 25, 35, 45]]
>>> results = module._compute(predictions=predicted, references=ground_truth, max_iou=0.5)
>>> results
{'idf1': 0.8421052631578947, 'idp': 0.8888888888888888,
 'idr': 0.8, 'recall': 0.8, 'precision': 0.8888888888888888,
 'num_unique_objects': 3, 'mostly_tracked': 2,
 'partially_tracked': 1, 'mostly_lost': 0,
 'num_false_positives': 1, 'num_misses': 2,
 'num_switches': 0, 'num_fragmentations': 0,
 'mota': 0.7, 'motp': 0.02981870229007634,
 'num_transfer': 0, 'num_ascend': 0,
 'num_migrate': 0}
```
## Input
Each line of the **predictions** array is a list with the following format:
```
[frame ID, object ID, x, y, width, height, confidence]
```
Each line of the **references** array is a list with the following format:
```
[frame ID, object ID, x, y, width, height]
```
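For example, these arrays can be flattened out of per-frame tracker output. A minimal sketch, assuming a hypothetical `tracker_output` dictionary keyed by frame ID (the structure is illustrative, not part of the module's API):

```python
# Hypothetical tracker output: {frame_id: [(object_id, x, y, w, h, confidence), ...]}
tracker_output = {
    1: [(1, 10, 20, 30, 40, 0.85)],
    2: [(1, 15, 25, 35, 45, 0.78), (2, 55, 65, 75, 85, 0.95)],
}

# Flatten into [frame ID, object ID, x, y, width, height, confidence] rows,
# the format the predictions argument expects.
predicted = [
    [frame_id, obj_id, x, y, w, h, conf]
    for frame_id, detections in sorted(tracker_output.items())
    for (obj_id, x, y, w, h, conf) in detections
]
# predicted == [[1, 1, 10, 20, 30, 40, 0.85],
#               [2, 1, 15, 25, 35, 45, 0.78],
#               [2, 2, 55, 65, 75, 85, 0.95]]
```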
The `max_iou` parameter is used to filter out bounding boxes whose IoU is below the threshold; the default value is 0.5. This means that if the IoU between a ground-truth bounding box and a predicted bounding box is less than 0.5, the predicted box is not considered for association.
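To make the threshold concrete, here is a sketch of a plain IoU computation for two boxes in `[x, y, width, height]` format. The `iou_xywh` helper is illustrative only; the module performs the matching internally via py-motmetrics:

```python
def iou_xywh(box_a, box_b):
    """Intersection-over-union for boxes given as [x, y, width, height]."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Intersection rectangle (zero if the boxes do not overlap)
    ix = max(ax, bx)
    iy = max(ay, by)
    iw = max(0.0, min(ax + aw, bx + bw) - ix)
    ih = max(0.0, min(ay + ah, by + bh) - iy)
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# A box shifted by 5 pixels in x and y still overlaps substantially:
print(iou_xywh([10, 20, 30, 40], [15, 25, 30, 40]))  # ~0.57, above the 0.5 default, so eligible for association
```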
## Output
The output is a dictionary containing the following metrics:
| Name                | Description                                                           |
| :------------------ | :-------------------------------------------------------------------- |
| idf1                | ID measures: global min-cost F1 score.                                |
| idp                 | ID measures: global min-cost precision.                               |
| idr                 | ID measures: global min-cost recall.                                  |
| recall              | Number of detections over number of objects.                          |
| precision           | Number of detected objects over sum of detected and false positives.  |
| num_unique_objects  | Total number of unique object ids encountered.                        |
| mostly_tracked      | Number of objects tracked for at least 80 percent of lifespan.        |
| partially_tracked   | Number of objects tracked between 20 and 80 percent of lifespan.      |
| mostly_lost         | Number of objects tracked less than 20 percent of lifespan.           |
| num_false_positives | Total number of false positives (false alarms).                       |
| num_misses          | Total number of misses.                                               |
| num_switches        | Total number of track switches.                                       |
| num_fragmentations  | Total number of switches from tracked to not tracked.                 |
| mota                | Multiple object tracker accuracy.                                     |
| motp                | Multiple object tracker precision.                                    |
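In py-motmetrics, `mota` follows the standard CLEAR MOT definition, MOTA = 1 − (FN + FP + IDSW) / GT, where GT is the total number of ground-truth boxes. A small sketch cross-checking this against the example output above; GT is not returned directly, so it is recovered here from `recall` and `num_misses` (this assumes recall < 1):

```python
results = {
    "recall": 0.8,
    "num_misses": 2,
    "num_false_positives": 1,
    "num_switches": 0,
    "mota": 0.7,
}

# recall = (GT - num_misses) / GT  =>  GT = num_misses / (1 - recall)
num_gt = results["num_misses"] / (1 - results["recall"])  # 10 ground-truth boxes

mota = 1 - (
    results["num_misses"]
    + results["num_false_positives"]
    + results["num_switches"]
) / num_gt
print(mota)  # ~0.7 (up to floating-point rounding), matching results["mota"]
```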
## Citations
```bibtex
@article{milan2016mot16,
  title={MOT16: A benchmark for multi-object tracking},
  author={Milan, Anton and Leal-Taix{\'e}, Laura and Reid, Ian and Roth, Stefan and Schindler, Konrad},
  journal={arXiv preprint arXiv:1603.00831},
  year={2016}
}
```
## Further References
- [Github Repository - py-motmetrics](https://github.com/cheind/py-motmetrics/tree/develop) | |