nielsr (HF Staff) committed
Commit ab397c3 · verified · 1 parent: 0bf299d

Update dataset card with paper, project, and GitHub links

Hi! I'm Niels from the Hugging Face community science team. I've updated the dataset card for PushUpBench to include links to the associated research paper, the project website, and the official `lmms-eval` GitHub repository. I also updated the description to better reflect the benchmark's scope as described in the paper abstract.

Files changed (1)
  1. README.md +39 -35
README.md CHANGED
````diff
@@ -1,49 +1,51 @@
 ---
+license: cc-by-4.0
+size_categories:
+- n<1K
+task_categories:
+- video-classification
+- visual-question-answering
+pretty_name: PushUpBench
 dataset_info:
   features:
   - name: id
     dtype: int64
   - name: name
     dtype: string
   - name: video_path
     dtype: string
   - name: count
     sequence:
       dtype: int64
   - name: fuzzy_action
     dtype: bool
   - name: complex_action
     dtype: bool
   splits:
   - name: test
     num_examples: 227
 configs:
 - config_name: default
   data_files:
   - split: test
     path: data/test.jsonl
-license: cc-by-4.0
-task_categories:
-- video-classification
-- visual-question-answering
 tags:
 - video
 - counting
 - repetition-counting
 - exercise
 - benchmark
-pretty_name: PushUpBench
-size_categories:
-- n<1K
 ---
 
 # PushUpBench: Video Repetition Counting Benchmark
 
-PushUpBench is a benchmark for evaluating vision-language models on their ability to count exercise repetitions in videos.
+[**Project Page**](https://pushupbench.com) | [**Paper**](https://huggingface.co/papers/2604.23407) | [**GitHub**](https://github.com/EvolvingLMMs-Lab/lmms-eval)
+
+PushUpBench is a benchmark for evaluating vision-language models (VLMs) on their ability to count exercise repetitions in videos. It was introduced in the paper ["PushupBench: Your VLM is not good at counting pushups"](https://huggingface.co/papers/2604.23407). The dataset consists of 446 long-form clips (averaging 36.7s) designed to test temporal reasoning and repetition counting beyond simple pattern recognition.
 
 ## Dataset Description
 
-- **Total samples**: 227
+- **Total samples**: 446 clips (227 in the test split)
 - **Video format**: MP4
 - **Task**: Count the number of repetitions of a specified exercise in a video
 
@@ -58,6 +60,8 @@ Each sample contains:
 
 ## Usage with lmms-eval
 
+PushUpBench is incorporated in the [`lmms-eval`](https://github.com/EvolvingLMMs-Lab/lmms-eval) toolkit.
+
 ```bash
 # Set the video directory
 export PUSHUPBENCH_VIDEO_DIR=/path/to/videos
@@ -72,6 +76,6 @@ python -m lmms_eval \
 
 ## Metrics
 
-- **Exact Match**: Prediction matches any value in the ground truth count list
-- **MAE**: Mean Absolute Error between prediction and primary ground truth
-- **OBO**: Off-By-One accuracy (prediction within 1 of any ground truth)
+- **Exact Match**: Prediction matches any value in the ground truth count list.
+- **MAE**: Mean Absolute Error between prediction and primary ground truth.
+- **OBO**: Off-By-One accuracy (prediction within 1 of any ground truth).
````
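
For reference, the `dataset_info` schema and `configs` block in the updated front matter map directly onto loading with the 🤗 `datasets` library. Below is a minimal sketch; the repo id is a placeholder (this commit page does not name the Hub repository), and the field names come from the schema above:

```python
from datasets import load_dataset

# Placeholder repo id: substitute the actual Hub id of this dataset.
ds = load_dataset("ORG/PushUpBench", split="test")  # 227 test examples per the card

sample = ds[0]
# Fields per the dataset_info schema in the YAML front matter:
#   id (int64), name (string), video_path (string),
#   count (sequence of int64), fuzzy_action (bool), complex_action (bool)
print(sample["video_path"], sample["count"])
```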
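The three metrics added at the end of the card are easy to pin down in code. The sketch below is one plausible reading, not the official `lmms-eval` scoring code; in particular, treating the first entry of the `count` list as the primary ground truth for MAE is an assumption:

```python
def exact_match(pred: int, counts: list[int]) -> bool:
    """Prediction matches any value in the ground-truth count list."""
    return pred in counts

def abs_error(pred: int, counts: list[int]) -> float:
    """Absolute error against the primary ground truth (assumed to be
    counts[0]); averaging this over all samples yields the reported MAE."""
    return abs(pred - counts[0])

def off_by_one(pred: int, counts: list[int]) -> bool:
    """Off-By-One accuracy: prediction within 1 of any ground truth."""
    return any(abs(pred - c) <= 1 for c in counts)

# Example: a sample whose `count` field lists two acceptable ground truths.
assert exact_match(12, [12, 13])
assert off_by_one(14, [12, 13])
assert abs_error(14, [12, 13]) == 2
```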