Update README.md

README.md CHANGED
@@ -1,13 +1,3 @@
-This is the accompanying dataset for the Social Navigation Scene Understanding Benchmark [(SocialNav-SUB)](https://larg.github.io/socialnav-sub).
-
-This VQA dataset provides:
-1) A set of VQA prompts for scene understanding of social navigation scenarios
-2) Additional data for prompts, including odometry information and 3D human tracking estimations
-3) A set of multiple human labels for each VQA prompt (multiple human answers per prompt)
-
-For more information please see our [project website](https://larg.github.io/socialnav-sub) or [SocialNav-SUB's github repo](https://github.com/LARG/SocialNavSUB).
-
---
language:
- en
@@ -26,43 +16,15 @@ tags:

# SocialNav-SUB: Benchmarking VLMs for Scene Understanding in Social Robot Navigation

-
-- Paper: [SocialNav-SUB: Benchmarking VLMs for Scene Understanding in Social Robot Navigation](https://huggingface.co/papers/2509.08757)
-- Project Page: [https://larg.github.io/socialnav-sub](https://larg.github.io/socialnav-sub)
-- Code: [https://github.com/michaelmunje/SocialNavSUB](https://github.com/michaelmunje/SocialNavSUB)
-
-## Getting Started
-
-To get started with SocialNav-SUB, you can follow these steps:
-
-1. **Install Dependencies**
-
-2. **Download the Dataset**
-
-Please download our dataset from [HuggingFace](https://huggingface.co/datasets/michaelmunje/SocialNav-SUB) by running the `download_dataset.sh` script:
-```bash
-./download_dataset.sh
-```
-
-3. **Benchmark a VLM**
-
-Make a config file and specify the VLM under the `baseline_model` parameter, along with parameters for the experiments (such as the prompt representation). API models require an environment variable containing an API key (`GOOGLE_API_KEY` or `OPENAI_API_KEY`).
-
-```bash
-python socialnavsub/evaluate_vlm.py --cfg_path <cfg_path>
-```
-
-4. **View Results**
-
-Results will be saved in the directory specified in the config file under the `evaluation_folder` entry. To postprocess the results, please run:
-
-```bash
-python socialnavsub/postprocess_results.py --cfg_path <cfg_path>
-```
+This is the accompanying dataset for the Social Navigation Scene Understanding Benchmark (SocialNav-SUB), a Visual Question Answering (VQA) dataset and benchmark designed to evaluate Vision-Language Models (VLMs) for scene understanding in real-world social robot navigation scenarios. SocialNav-SUB provides a unified framework for evaluating VLMs against human and rule-based baselines across VQA tasks requiring spatial, spatiotemporal, and social reasoning in social robot navigation. It aims to identify critical gaps in the social scene understanding capabilities of current VLMs, setting the stage for further research in foundation models for social robot navigation.
+
+This VQA dataset provides:
+1) A set of VQA prompts for scene understanding of social navigation scenarios
+2) Additional data for prompts, including odometry information and 3D human tracking estimations
+3) A set of multiple human labels for each VQA prompt (multiple human answers per prompt)
+
+For more information, please see the following:
+
+- Paper: [SocialNav-SUB: Benchmarking VLMs for Scene Understanding in Social Robot Navigation](https://arxiv.org/abs/2509.08757)
+- Project Page: [https://larg.github.io/socialnav-sub](https://larg.github.io/socialnav-sub)
+- Code: [https://github.com/LARG/SocialNavSUB](https://github.com/LARG/SocialNavSUB)
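
If the `download_dataset.sh` script mentioned in the (removed) Getting Started steps is not available in your checkout, the dataset files can also be pulled straight from the Hub. This is only a minimal sketch, not part of the official instructions: it assumes the `huggingface_hub` CLI is installed, and the `--local-dir` path is an illustrative placeholder.

```bash
# Optional alternative to ./download_dataset.sh: fetch the dataset repo directly
# from HuggingFace. The target directory is a placeholder of your choosing.
pip install -U huggingface_hub
huggingface-cli download michaelmunje/SocialNav-SUB --repo-type dataset --local-dir data/socialnav-sub
```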
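
The "Benchmark a VLM" step above names the `baseline_model` and `evaluation_folder` config entries but does not show a config file. The sketch below is only an illustration under assumptions: the YAML format, the `example_config.yaml` filename, and all concrete values are hypothetical placeholders; only the two field names, the scripts, the `--cfg_path` flag, and the API key variable names come from the README.

```bash
# Write a hypothetical config (YAML assumed) and run the two commands from the README.
# Only baseline_model, evaluation_folder, the scripts, the --cfg_path flag, and the API
# key variable names come from the README; everything else here is a placeholder.
cat > example_config.yaml <<'EOF'
baseline_model: gpt-4o            # VLM to benchmark
evaluation_folder: results/gpt-4o # where evaluation results are written
# ...other experiment parameters (e.g., the prompt representation) would go here
EOF

export OPENAI_API_KEY=<your-key>  # or GOOGLE_API_KEY, depending on the API model
python socialnavsub/evaluate_vlm.py --cfg_path example_config.yaml
python socialnavsub/postprocess_results.py --cfg_path example_config.yaml
```

Results would then appear under whatever directory the `evaluation_folder` entry points to.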