Datasets:
Size: 100K<n<1M
ArXiv:
Tags:
egocentric-video
mistake-detection
temporal-localization
video-language-grounding
hand-object-interaction
action-recognition
License:
Commit e2b50a5 by Yayuan Li (parent: 4e57a65): "formatting"
Files changed: .gitignore (+1 -0), README.md (+12 -17)
.gitignore ADDED
@@ -0,0 +1 @@
+CLAUDE.local.md
README.md CHANGED
@@ -22,13 +22,21 @@ size_categories:
 - 100K<n<1M
 ---
 
-#
-
-
-
+# Mistake Attribution: Fine-Grained Mistake Understanding in Egocentric Videos
+
+**CVPR 2026**
+
+[Yayuan Li](https://www.linkedin.com/in/yayuan-li-148659272/)<sup>1</sup>, [Aadit Jain](https://www.linkedin.com/in/jain-aadit/)<sup>1</sup>, [Filippos Bellos](https://www.linkedin.com/in/filippos-bellos-168595156/)<sup>1</sup>, [Jason J. Corso](https://www.linkedin.com/in/jason-corso/)<sup>1,2</sup>
+
+<sup>1</sup>University of Michigan, <sup>2</sup>Voxel51
+
+[[Paper](https://arxiv.org/abs/2511.20525)] [[Code](https://github.com/yayuanli/MATT)] [[Project Page](https://yayuanli.github.io/MATT/)]
+
+---
+
+> **Dataset coming soon.** We are preparing the data for public release. Stay tuned!
+
+## MATT-Bench Overview
 
 MATT-Bench provides two large-scale benchmarks for **Mistake Attribution (MATT)** — a task that goes beyond binary mistake detection to attribute *what* semantic role was violated, *when* the mistake became irreversible (Point-of-No-Return), and *where* the mistake occurred in the frame.
 
@@ -49,19 +57,6 @@ Each sample consists of an instruction text and an attempt video, annotated with
 - **Temporal Attribution**: The Point-of-No-Return (PNR) frame where the mistake becomes irreversible (Ego4D-M)
 - **Spatial Attribution**: Bounding box localizing the mistake region in the PNR frame (Ego4D-M)
 
-## Links
-
-- [Paper (arXiv)](https://arxiv.org/abs/2511.20525)
-- [Code (GitHub)](https://github.com/yayuanli/MATT)
-- [Project Page](https://yayuanli.github.io/MATT/)
-
-## Authors
-
-- [Yayuan Li](https://www.linkedin.com/in/yayuan-li-148659272/) — University of Michigan
-- [Aadit Jain](https://www.linkedin.com/in/jain-aadit/) — University of Michigan
-- [Filippos Bellos](https://www.linkedin.com/in/filippos-bellos-168595156/) — University of Michigan
-- [Jason J. Corso](https://www.linkedin.com/in/jason-corso/) — University of Michigan, Voxel51
-
 ## Citation
 
 ```bibtex
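Since the dataset is not yet released, the annotation structure described in the README (per-sample what / when / where attribution) can only be sketched. A minimal illustration, assuming hypothetical field names (`violated_role`, `pnr_frame`, `bbox`) and a standard IoU metric for the spatial axis; the actual schema and official metrics may differ once MATT-Bench is published:

```python
# Hypothetical MATT-Bench sample: field names are illustrative assumptions,
# not the released schema. Each sample pairs an instruction with an attempt
# video and carries three attribution targets (what / when / where).
sample = {
    "instruction": "Crack the egg into the bowl",
    "video": "attempt_0001.mp4",
    "is_mistake": True,
    "violated_role": "destination",    # WHAT: semantic role violated
    "pnr_frame": 412,                  # WHEN: Point-of-No-Return frame (Ego4D-M)
    "bbox": [0.31, 0.42, 0.58, 0.77],  # WHERE: normalized [x1, y1, x2, y2]
}

def bbox_iou(a, b):
    """Intersection-over-Union of two [x1, y1, x2, y2] boxes; a standard
    spatial-localization metric (not necessarily MATT's official one)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Toy prediction scored against the annotation on all three axes.
pred = {"violated_role": "destination", "pnr_frame": 420,
        "bbox": [0.30, 0.40, 0.60, 0.75]}
role_correct = pred["violated_role"] == sample["violated_role"]
frame_error = abs(pred["pnr_frame"] - sample["pnr_frame"])  # frames off PNR
iou = bbox_iou(pred["bbox"], sample["bbox"])
print(role_correct, frame_error, round(iou, 3))
```

The point of the sketch is that a MATT prediction is graded per axis (role match, temporal offset from the PNR frame, box overlap) rather than as a single binary mistake label.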