nielsr (HF Staff) committed · Commit 198a167 (verified) · 1 parent: 2316725

Improve dataset card: Add description, links, sample usage, and refine metadata


This PR significantly enhances the dataset card for SocialNav-SUB by:
- Adding a descriptive introduction based on the paper's abstract.
- Including direct links to the paper ([https://huggingface.co/papers/2509.08757](https://huggingface.co/papers/2509.08757)), the project page, and the GitHub repository.
- Incorporating a "Getting Started" section with code snippets from the GitHub README for installing dependencies, downloading the dataset, benchmarking a VLM, and viewing results.
- Updating `task_categories` to `question-answering`, `video-text-to-text`, and `robotics`.
- Updating `tags` by removing `robotics` and adding `vqa` for better categorization and searchability.

Files changed (1)
  1. README.md +51 -6
README.md CHANGED
@@ -1,13 +1,58 @@
  ---
+ language:
+ - en
  license: mit
+ size_categories:
+ - 1K<n<10K
  task_categories:
  - question-answering
- language:
- - en
+ - video-text-to-text
+ - robotics
  tags:
  - vlm
- - robotics
  - navigation
- size_categories:
- - 1K<n<10K
- ---
+ - vqa
+ ---

# SocialNav-SUB: Benchmarking VLMs for Scene Understanding in Social Robot Navigation

The Social Navigation Scene Understanding Benchmark (SocialNav-SUB) is a Visual Question Answering (VQA) dataset and benchmark for evaluating Vision-Language Models (VLMs) on scene understanding in real-world social robot navigation scenarios. It provides a unified framework for comparing VLMs against human and rule-based baselines on VQA tasks that require spatial, spatiotemporal, and social reasoning, and it aims to identify critical gaps in the social scene understanding capabilities of current VLMs, setting the stage for further research on foundation models for social robot navigation.

- Paper: [SocialNav-SUB: Benchmarking VLMs for Scene Understanding in Social Robot Navigation](https://huggingface.co/papers/2509.08757)
- Project Page: [https://larg.github.io/socialnav-sub](https://larg.github.io/socialnav-sub)
- Code: [https://github.com/michaelmunje/SocialNavSUB](https://github.com/michaelmunje/SocialNavSUB)

## Getting Started

To get started with SocialNav-SUB, follow these steps:

1. **Install Dependencies**

```bash
pip install -r requirements.txt
```
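
The requirements file ships with the code repository, so a typical setup starts from a clone of the GitHub repo linked above (a sketch, not an official instruction; adapt the environment management to your setup):

```bash
# Sketch: clone the benchmark code, then install its dependencies.
git clone https://github.com/michaelmunje/SocialNavSUB.git
cd SocialNavSUB
pip install -r requirements.txt
```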

2. **Download the Dataset**

Download the dataset from [Hugging Face](https://huggingface.co/datasets/michaelmunje/SocialNav-SUB) by running the `download_dataset.sh` script:
```bash
./download_dataset.sh
```
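
If you prefer not to use the script, the files can also be pulled directly from the Hub with the `huggingface_hub` CLI. This is an illustrative alternative rather than part of the official instructions; the target directory is a placeholder:

```bash
# Illustrative alternative to download_dataset.sh: fetch the dataset repo
# from the Hugging Face Hub into a local folder (folder name is an example).
pip install -U huggingface_hub
huggingface-cli download michaelmunje/SocialNav-SUB --repo-type dataset --local-dir socialnav-sub-data
```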

3. **Benchmark a VLM**

Create a config file that specifies the VLM to evaluate under the `baseline_model` parameter, along with the experiment settings (such as the prompt representation). API-based models require an API key in an environment variable (`GOOGLE_API_KEY` or `OPENAI_API_KEY`).

```bash
python socialnavsub/evaluate_vlm.py --cfg_path <cfg_path>
```
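
For example, an evaluation of an OpenAI-hosted model might be launched as follows (the key value and config path are placeholders; see the repository's example configs for the exact schema):

```bash
# Hypothetical invocation: export the API key expected by API-backed models,
# then point evaluate_vlm.py at your experiment config (placeholder path).
export OPENAI_API_KEY="sk-..."   # or GOOGLE_API_KEY for Google API models
python socialnavsub/evaluate_vlm.py --cfg_path configs/my_vlm_experiment.yaml
```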

4. **View Results**

Results are saved in the directory specified by the `evaluation_folder` entry of the config file. To postprocess the results, run:

```bash
python socialnavsub/postprocess_results.py --cfg_path <cfg_path>
```

The postprocessed results are written to the CSV file specified by the `postprocessed_results_csv` entry of the config file (by default, `postprocessed_results.csv`).
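
For a quick look at the output, the CSV can be inspected directly from the shell (a minimal sketch, assuming the default filename and the current working directory):

```bash
# Show the header and first few rows of the postprocessed results
# (default filename; adjust if postprocessed_results_csv points elsewhere).
head -n 5 postprocessed_results.csv
```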