Add link to paper and code repository (#1)
Add link to paper and code repository (af9ba63d130c7608cd459083966115c7b86c6352)
Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>
README.md CHANGED

@@ -2,13 +2,14 @@
 license: apache-2.0
 task_categories:
 - robotics
+library_name: robotics
 ---

 # MemoryBench Dataset

-MemoryBench is a benchmark dataset designed to evaluate spatial memory and action recall in robotic manipulation. This dataset accompanies the **SAM2Act+** framework, introduced in the paper *SAM2Act: Integrating Visual Foundation Model with A Memory Architecture for Robotic Manipulation*. For detailed task descriptions and more information about this paper, please visit SAM2Act's [website](https://sam2act.github.io).
+MemoryBench is a benchmark dataset designed to evaluate spatial memory and action recall in robotic manipulation. This dataset accompanies the **SAM2Act+** framework, introduced in the paper *[SAM2Act: Integrating Visual Foundation Model with A Memory Architecture for Robotic Manipulation](https://huggingface.co/papers/2501.18564)*. For detailed task descriptions and more information about this paper, please visit SAM2Act's [website](https://sam2act.github.io). Code can be found at [https://github.com/sam2act/sam2act](https://github.com/sam2act/sam2act).

-The dataset contains scripted demonstrations for three memory-dependent tasks designed in RLBench (same version as the one used in [PerAct](https://peract.github.io/)):
+The dataset contains scripted demonstrations for three memory-dependent tasks designed in RLBench (same version as the one used in [PerAct](https://peract.github.io/)):

 - **Reopen Drawer**: Tests 3D spatial memory along the z-axis.
 - **Put Block Back**: Evaluates 2D spatial memory along the x-y plane.
@@ -42,12 +43,12 @@ If you use this dataset, please cite the SAM2Act paper:

 ```bibtex
 @misc{fang2025sam2act,
-title={SAM2Act: Integrating Visual Foundation Model with A Memory Architecture for Robotic Manipulation},
+title={SAM2Act: Integrating Visual Foundation Model with A Memory Architecture for Robotic Manipulation},
 author={Haoquan Fang and Markus Grotz and Wilbert Pumacay and Yi Ru Wang and Dieter Fox and Ranjay Krishna and Jiafei Duan},
 year={2025},
 eprint={2501.18564},
 archivePrefix={arXiv},
 primaryClass={cs.RO},
-url={https://arxiv.org/abs/2501.18564},
+url={https://arxiv.org/abs/2501.18564},
 }
 ```
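For reference, below is a minimal sketch of pulling the dataset files from the Hub so the scripted RLBench demonstrations can be used locally. It is not part of this commit: the `repo_id` and `local_dir` values are placeholders, since the diff above does not state the canonical dataset path.

```python
# Minimal sketch (not part of this commit): download the MemoryBench
# demonstrations from the Hugging Face Hub for local use.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="<user>/MemoryBench",  # placeholder -- substitute the actual dataset repo id
    repo_type="dataset",           # MemoryBench is hosted as a dataset repo, not a model
    local_dir="./memorybench",     # example target directory for the episodes
)
print(f"Scripted demonstrations downloaded to {local_path}")
```

Since the episodes follow the PerAct-style RLBench format, they are typically replayed through the RLBench/SAM2Act training code rather than loaded with `datasets.load_dataset`.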