simplexsigil2 committed on
Commit 38f9a79 · verified · 1 Parent(s): 3d97c3d

Upload folder using huggingface_hub

Files changed (1): README.md +159 -359

README.md CHANGED
@@ -71,455 +71,255 @@ configs:
 
 # WanFall: A Synthetic Activity Recognition Dataset
 
- This repository contains temporal segment annotations for WanFall, a synthetic activity recognition dataset focused on fall detection and related activities of daily living.
 
- **This dataset is currently under development and subject to change!**
-
- ## Overview
-
- WanFall is a large-scale synthetic dataset designed for activity recognition research, with emphasis on fall detection and posture transitions. The dataset features computer-generated videos of human actors performing various activities in controlled virtual environments.
-
- **Key Features:**
- - **~12,000 video clips** with dense temporal annotations
- - **16 activity classes** including falls, posture transitions, and static states
- - **5.0625 seconds** per video clip (81 frames @ 16 fps)
- - **Synthetic generation** enabling diverse scenarios and controlled variation
- - **Dense temporal segmentation** with frame-level precision
 
 ## Dataset Statistics
 
- - **Total videos**: 12,000
- - **Total temporal segments**: 19,228
- - **Annotation format**: Temporal segmentation (start/end timestamps) with rich metadata
- - **Video duration**: 5.0625 seconds per clip
- - **Frame count**: 81 frames per video
- - **Frame rate**: 16 fps
- - **Annotation formats**: Temporal segments (start/end times) OR frame-wise labels (81 per video)
- - **Split configurations**: 4 split configs + framewise support
-   - `random`: 80/10/10 train/val/test split (seed 42) - 9,600/1,200/1,200 videos
-   - `cross_age`: Cross-age evaluation - 4,000/2,000/6,000 videos
-   - `cross_ethnicity`: Cross-ethnicity evaluation - 5,178/1,741/5,081 videos
-   - `cross_bmi`: Cross-BMI evaluation - 6,066/2,962/2,972 videos
-   - `framewise=True`: Add frame-wise labels (81 per video) to any split
- - **Metadata fields**: 12 demographic and scene attributes per video
-
- ## Activity Categories
-
- The dataset includes **16 activity classes** organized into dynamic actions and static states:
-
- ### Class List
- - **0. walk** - Walking movement, including jogging and running
- - **1. fall** - Falling down action (from any previous state), beginning with the moment of lost control and ending with a resting state or activity change
- - **2. fallen** - Person in fallen state (on ground after fall)
- - **3. sit_down** - Transitioning from standing to sitting
- - **4. sitting** - Stationary sitting posture
- - **5. lie_down** - Intentionally lying down (not falling)
- - **6. lying** - Stationary lying posture (after intentional lie_down)
- - **7. stand_up** - Getting up, either from fallen or lying, into a sitting or standing position (not only to standing)
- - **8. standing** - Stationary standing posture
- - **9. other** - Actions not fitting above categories
- - **10. kneel_down** - Transitioning to kneeling position
- - **11. kneeling** - Stationary kneeling posture
- - **12. squat_down** - Transitioning to squatting position
- - **13. squatting** - Stationary squatting posture
- - **14. crawl** - Crawling movement on hands and knees
- - **15. jump** - Jumping action
-
- ### Label Format
-
- The `labels/wanfall.csv` file contains temporal segments with rich metadata:
-
- ```csv
- path,label,start,end,subject,cam,dataset,age_group,gender_presentation,monk_skin_tone,race_ethnicity_omb,bmi_band,height_band,environment_category,camera_shot,speed,camera_elevation,camera_azimuth,camera_distance
- ```
-
- **Core Fields:**
- - `path`: Relative path to the video (without .mp4 extension, e.g., "fall/fall_ch_001")
- - `label`: Activity class ID (0-15)
- - `start`: Start time of the segment in seconds
- - `end`: End time of the segment in seconds
- - `subject`: Subject ID (`-1` for synthetic data)
- - `cam`: Camera view ID (`-1` for single view)
- - `dataset`: Dataset name (`wanfall`)
-
- **Demographic Metadata:**
- - `age_group`: One of 6 age categories
-   - toddlers_1_4, children_5_12, teenagers_13_17, young_adults_18_34, middle_aged_35_64, elderly_65_plus
- - `gender_presentation`: Visual gender presentation (male, female)
- - `monk_skin_tone`: [Monk Skin Tone scale](https://skintone.google/the-scale) (mst1-mst10)
-   - 10-point scale representing diverse skin tones from lightest to darkest
-   - Developed by Dr. Ellis Monk for inclusive representation
- - `race_ethnicity_omb`: [OMB race/ethnicity categories](https://www.census.gov/newsroom/blogs/random-samplings/2024/04/updates-race-ethnicity-standards.html)
-   - **white**: White/European American
-   - **black**: Black/African American
-   - **asian**: Asian
-   - **hispanic_latino**: Hispanic/Latino
-   - **aian**: American Indian and Alaska Native
-   - **nhpi**: Native Hawaiian and Pacific Islander
-   - **mena**: Middle Eastern and North African
- - `bmi_band`: Body type (underweight, normal, overweight, obese)
- - `height_band`: Height category (short, avg, tall)
-
- **Scene Metadata:**
- - `environment_category`: Scene location (indoor, outdoor)
- - `camera_shot`: Shot composition (static_wide, static_medium_wide)
- - `speed`: Frame rate (24fps_rt, 25fps_rt, 30fps_rt, std_rt)
- - `camera_elevation`: Camera height (eye, low, high, top)
- - `camera_azimuth`: Camera angle (front, rear, left, right)
- - `camera_distance`: Camera distance (medium, far)
-
- ### Split Format
-
- Split files in the `splits/` directory list the video paths included in each partition:
-
- ```
- path
- fall/fall_ch_001
- fall/fall_ch_002
- ...
- ```
-
- ## Usage
-
- The WanFall dataset provides a flexible Python API through the HuggingFace `datasets` library with multiple configurations and loading modes.
 
- ### Quick Start
 
 ```python
 from datasets import load_dataset
 
- # Load with random 80/10/10 split (temporal segments, default)
 dataset = load_dataset("simplexsigil2/wanfall", "random")
 
- print(f"Train: {len(dataset['train'])} segments")
- print(f"Validation: {len(dataset['validation'])} segments")
- print(f"Test: {len(dataset['test'])} segments")
 
- # Access example
- example = dataset['train'][0]
- print(f"Video: {example['path']}")
- print(f"Activity: {example['label']} ({example['start']:.2f}s - {example['end']:.2f}s)")
- print(f"Age group: {example['age_group']}")
 ```
 
- ### Dataset Configurations
 
- WanFall provides **7 configurations** for different use cases:
 
- **Key Distinction: Segment-Level vs Video-Level**
 
- | Configuration | Sample Unit | Train Size | Has start/end? | Has frame_labels? |
- |--------------|-------------|------------|----------------|-------------------|
- | `random` | **Segment** | 15,344 segments | ✅ Yes | ❌ No |
- | `random` + `framewise=True` | **Video** | 9,600 videos | ❌ No | ✅ Yes (81 labels) |
- | `cross_age` | **Segment** | 6,267 segments | ✅ Yes | ❌ No |
- | `cross_age` + `framewise=True` | **Video** | 4,000 videos | ❌ No | ✅ Yes (81 labels) |
- | `labels` | **Segment** | 19,228 segments | ✅ Yes | ❌ No |
- | `framewise` | **Video** | 12,000 videos | ❌ No | ✅ Yes (81 labels) |
 
- #### 1. **Temporal Segments** (Default)
 
- Load temporal segment annotations where **each sample is a segment** with start/end times:
 
 ```python
- # Default: random split with temporal segments
- dataset = load_dataset("simplexsigil2/wanfall")  # or "random"
-
- # Each example is a SEGMENT (not a video)
- example = dataset['train'][0]
- print(example['path'])       # "fall/fall_ch_001"
- print(example['label'])      # 1 (activity class ID)
- print(example['start'])      # 0.0 (start time in seconds)
- print(example['end'])        # 1.006 (end time in seconds)
- print(example['age_group'])  # Demographic metadata
-
- # Dataset contains multiple segments per video
- print(f"Total segments in train: {len(dataset['train'])}")  # 15,344 segments
- print(f"Unique videos: {len(set([ex['path'] for ex in dataset['train']]))}")  # 9,600 videos
 ```
 
- **Key characteristics:**
- - **Sample = Temporal Segment** (one video can have multiple segments)
- - Each segment has `start` and `end` times
- - Train: 15,344 segments from 9,600 videos
- - Val: 1,927 segments from 1,200 videos
- - Test: 1,957 segments from 1,200 videos
-
- **Available split configs:**
- - `random` - 80/10/10 split (15,344/1,927/1,957 segments)
- - `cross_age` - Cross-age evaluation (6,267/3,762/9,199 segments)
- - `cross_ethnicity` - Cross-ethnicity evaluation (8,267/2,762/8,199 segments)
- - `cross_bmi` - Cross-BMI evaluation (9,675/4,701/4,852 segments)
 
- #### 2. **Frame-Wise Labels**
 
- Load dense frame-level labels where **each sample is a video** with 81 labels:
 
- ```python
- # Standalone: all 12,000 videos with frame-wise labels
- dataset = load_dataset("simplexsigil2/wanfall", "framewise")
-
- # With splits: random split with frame-wise labels
- dataset = load_dataset("simplexsigil2/wanfall", "random", framewise=True)
-
- # Each example is a VIDEO (not a segment)
- example = dataset['train'][0]
- print(example['path'])               # "fall/fall_ch_001"
- print(example['frame_labels'])       # [1, 1, 1, ..., 11, 11] (81 labels)
- print(len(example['frame_labels']))  # 81 frames
- print(example['age_group'])          # Demographic metadata included
-
- # Dataset contains one sample per video
- print(f"Total videos in train: {len(dataset['train'])}")  # 9,600 videos
 ```
 
- **Key characteristics:**
- - **Sample = Video** (one sample per video, no segments)
- - Each video has 81 frame labels (no start/end times)
- - Train: 9,600 videos
- - Val: 1,200 videos
- - Test: 1,200 videos
-
- **Key features:**
- - **81 labels per video** (one per frame @ 16 fps)
- - **Works with all split configs**: Add `framewise=True` to any split
- - **Efficient**: 348KB compressed archive, automatically cached
- - **Complete metadata**: All demographic attributes included
 
- #### 3. **Paths Only Mode**
-
- Load only video paths for custom video loading:
 
 ```python
- # Minimal loading: only video paths
- dataset = load_dataset("simplexsigil2/wanfall", "random", paths_only=True)
-
- # Only contains paths
- example = dataset['train'][0]
- print(example)  # {'path': 'fall/fall_ch_001'}
 ```
 
- #### 4. **All Segments** (No Splits)
 
- Load all 19,228 temporal segments without split partitions:
 
 ```python
- dataset = load_dataset("simplexsigil2/wanfall", "labels")
- all_segments = dataset['train']  # Single split with all segments
- print(f"Total segments: {len(all_segments)}")  # 19,228 segments
-
- # Each sample is a segment (like config 1, but no train/val/test split)
- example = all_segments[0]
- print(f"Path: {example['path']}")
- print(f"Segment: {example['start']:.2f}s - {example['end']:.2f}s")
- print(f"Label: {example['label']}")
 ```
 
- #### 5. **Video Metadata Only**
 
- Load only video-level metadata (12,000 videos):
 
 ```python
- dataset = load_dataset("simplexsigil2/wanfall", "metadata")
- metadata = dataset['train']  # 12,000 videos
- print(f"Columns: {metadata.column_names}")
- # ['path', 'dataset', 'age_group', 'gender_presentation', ...]
 ```
 
- ### Complete Usage Examples
 
- #### Example 1: Training with Temporal Segments (Segment-Level)
 
- When using temporal segments, **each sample is a segment** with start/end times. Multiple segments can come from the same video.
 
 ```python
- from datasets import load_dataset
-
- # Load random split (segment-level samples)
 dataset = load_dataset("simplexsigil2/wanfall", "random")
 
- print(f"Training on {len(dataset['train'])} segments")  # 15,344 segments
-
- # Training loop - each iteration is ONE SEGMENT
- for example in dataset['train']:
-     video_path = example['path']
-     activity_label = example['label']  # 0-15
-     start_time = example['start']
-     end_time = example['end']
-
-     # Load only the frames for this segment
-     # frames = load_video_segment(video_path, start_time, end_time)
-     # model.train(frames, activity_label)
-
- # Note: The same video can appear multiple times with different segments
- # E.g., "fall/fall_ch_001" might have segments [0.0-1.0] and [1.0-5.0]
 ```
 
- #### Example 2: Training with Frame-Wise Labels (Video-Level)
-
- When using frame-wise labels, **each sample is a video** with 81 frame labels. Each video appears only once.
-
 ```python
- from datasets import load_dataset
-
- # Load random split with frame-wise labels (video-level samples)
 dataset = load_dataset("simplexsigil2/wanfall", "random", framewise=True)
 
- print(f"Training on {len(dataset['train'])} videos")  # 9,600 videos
-
- # Training loop - each iteration is ONE VIDEO
- for example in dataset['train']:
-     video_path = example['path']
-     frame_labels = example['frame_labels']  # 81 labels (one per frame)
-
-     # Load all frames from the video
-     # frames = load_video(video_path)  # Shape: (81, H, W, 3)
-     # model.train(frames, frame_labels)
-
- # Note: Each video appears exactly once with its 81 frame labels
 ```
 
- #### Example 3: Cross-Demographic Evaluation
 
 ```python
- from datasets import load_dataset
-
- # Train on young adults, test on elderly
- cross_age = load_dataset("simplexsigil2/wanfall", "cross_age", framewise=True)
-
- # Train
- for example in cross_age['train']:
-     age = cross_age['train'].features['age_group'].int2str(example['age_group'])
-     print(f"Training on {age}")  # "young_adults_18_34" or "middle_aged_35_64"
-
- # Test
- for example in cross_age['test']:
-     age = cross_age['test'].features['age_group'].int2str(example['age_group'])
-     print(f"Testing on {age}")  # "elderly_65_plus", "children_5_12", etc.
 ```
 
- #### Example 4: Filtering by Demographics
-
 ```python
- from datasets import load_dataset
-
- # Load all segments
 dataset = load_dataset("simplexsigil2/wanfall", "labels")
 segments = dataset['train']
 
- # Access label feature for conversion
- label_feature = segments.features['label']
- age_feature = segments.features['age_group']
-
 # Filter elderly fall segments
 elderly_falls = [
     ex for ex in segments
-     if age_feature.int2str(ex['age_group']) == 'elderly_65_plus'
-     and ex['label'] == 1  # fall
 ]
-
- print(f"Found {len(elderly_falls)} elderly fall segments")
 ```
 
- ### Label Conversion
-
- Labels are stored as integers (0-15) but can be converted to strings:
-
 ```python
- dataset = load_dataset("simplexsigil2/wanfall", "random")
-
- # Get label feature
- label_feature = dataset['train'].features['label']
-
- # Convert integer to string
- label_name = label_feature.int2str(1)  # "fall"
-
- # Convert string to integer
- label_id = label_feature.str2int("walk")  # 0
-
- # Access all label names
- all_labels = label_feature.names
- print(all_labels)  # ['walk', 'fall', 'fallen', ...]
 ```
 
- ### Cross-Demographic Evaluation Splits
-
- The dataset provides three cross-demographic split configurations for evaluating model robustness across different demographic groups:
-
- #### Cross-Age Split (`cross_age`)
- Evaluates model performance across different age groups:
- - **Train** (4,000 videos): Young adults (18-34) + Middle-aged (35-64)
- - **Validation** (2,000 videos): Teenagers (13-17)
- - **Test** (6,000 videos): Children (5-12) + Toddlers (1-4) + Elderly (65+)
-
- #### Cross-Ethnicity Split (`cross_ethnicity`)
- Evaluates model performance across different racial/ethnic groups with maximum phenotypic distance:
- - **Train** (5,178 videos): White + Asian + Hispanic/Latino
- - **Validation** (1,741 videos): American Indian and Alaska Native (AIAN)
- - **Test** (5,081 videos): Black + Middle Eastern/North African (MENA) + Native Hawaiian/Pacific Islander (NHPI)
-
- #### Cross-BMI Split (`cross_bmi`)
- Evaluates model performance across different body types:
- - **Train** (6,066 videos): Normal weight + Underweight
- - **Validation** (2,962 videos): Overweight
- - **Test** (2,972 videos): Obese
-
- ## Technical Properties
-
- ### Video Specifications
- - **Resolution**: Variable (synthetic generation)
- - **Duration**: 5.0625 seconds (consistent across all videos)
- - **Frame count**: 81 frames
- - **Frame rate**: 16 fps
- - **Format**: MP4 (not included in this dataset; videos must be obtained separately)
 
- ### Annotation Properties
- - **Temporal precision**: Sub-second (timestamps with decimal precision)
- - **Coverage**: Most frames are labeled, with some gaps
- - **Overlap handling**: Segments are annotated chronologically
- - **Activity sequences**: Natural transitions (e.g., walk → fall → fallen → stand_up)
 
- ## Motion Types
 
- Activities are classified into two main motion types:
-
- **Dynamic motions** (e.g., `walk`, `fall`, `stand_up`):
- - Labeled from the first frame where the motion begins
- - End when the person reaches a resting state
-
- **Static states** (e.g., `fallen`, `sitting`, `lying`):
- - Begin when the person comes to rest in that posture
- - Continue until the next motion begins
-
- ## Label Sequences
-
- Videos often contain natural sequences of activities:
- - **Fall sequence**: walk → fall → fallen → stand_up
- - **Sit sequence**: walk → sit_down → sitting → stand_up
- - **Lie sequence**: walk → lie_down → lying → stand_up
-
- Not all transitions include static states (e.g., a person might stand_up immediately after falling, without a `fallen` state).
-
- ## Demographic Diversity
-
- The dataset includes rich demographic and scene metadata for every video, enabling bias analysis and cross-demographic evaluation.
- However, while age, gender, and ethnicity are generated fairly consistently, these attributes were merely provided in the generation prompts, and due to model biases the resulting videos can deviate.
-
- ### Overview
 
 ![Demographic Overview](figures/demographic_overview.png)
 
- ### Scene Variations
 
- Beyond demographic diversity, the dataset includes:
- - **Environment**: Indoor and outdoor settings
- - **Camera Angles**: Multiple elevations (eye, low, high, top), azimuths (front, rear, left, right), and distances
- - **Camera Shots**: Static wide and medium-wide compositions
- - **Frame Rates**: Various speeds (24fps, 25fps, 30fps, standard real-time)
 
 ## License
 
- The annotations and split definitions in this repository are released under the [Creative Commons Attribution-NonCommercial 4.0 International License](https://creativecommons.org/licenses/by-nc/4.0/).
 
- The video data is synthetic and must be obtained separately from the original source; more information will follow in the future.
 
 
 # WanFall: A Synthetic Activity Recognition Dataset
 
+ Synthetic activity recognition dataset with 12,000 videos focused on fall detection and activities of daily living. Features rich demographic metadata and multiple evaluation protocols for bias analysis.
 
+ **Status:** Under active development, subject to change.
 
 ## Dataset Statistics
 
+ | Property | Value |
+ |----------|-------|
+ | **Videos** | 12,000 (5.0625s each) |
+ | **Temporal Segments** | 19,228 |
+ | **Activity Classes** | 16 |
+ | **Frames per Video** | 81 frames @ 16fps |
+ | **Annotation Formats** | Temporal segments OR frame-wise labels |
+ | **Metadata Fields** | 12 (6 demographic + 6 scene) |
 
+ ## Quick Start
 
 ```python
 from datasets import load_dataset
 
+ # Random split with temporal segments (default)
 dataset = load_dataset("simplexsigil2/wanfall", "random")
 
+ # Random split with frame-wise labels (81 per video)
+ dataset = load_dataset("simplexsigil2/wanfall", "random", framewise=True)
 
+ # Cross-demographic evaluation
+ cross_age = load_dataset("simplexsigil2/wanfall", "cross_age")
 ```
 
+ ## Activity Classes
 
+ 16 activity classes covering falls, posture transitions, and static states:
 
+ ```python
+ LABEL_MAP = {
+     0:  "walk",        # Walking movement, including jogging and running
+     1:  "fall",        # Falling down action (loss of control)
+     2:  "fallen",      # Person on ground after fall
+     3:  "sit_down",    # Transition from standing to sitting
+     4:  "sitting",     # Stationary sitting posture
+     5:  "lie_down",    # Intentionally lying down (not falling)
+     6:  "lying",       # Stationary lying posture
+     7:  "stand_up",    # Getting up (to sitting or standing)
+     8:  "standing",    # Stationary standing posture
+     9:  "other",       # Unclassified activities
+     10: "kneel_down",  # Transition to kneeling
+     11: "kneeling",    # Stationary kneeling posture
+     12: "squat_down",  # Transition to squatting
+     13: "squatting",   # Stationary squatting posture
+     14: "crawl",       # Crawling movement on hands and knees
+     15: "jump",        # Jumping action
+ }
+ ```
 
+ **Motion Types** (a small helper sketch follows below):
+ - **Dynamic** (0-1, 3, 5, 7, 9-10, 12, 14-15): Transitions and movements
+ - **Static** (2, 4, 6, 8, 11, 13): Stationary postures
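 
+ As an illustration only (a hypothetical helper, not part of the loader API), the grouping above can be derived directly from `LABEL_MAP`:
+ 
+ ```python
+ STATIC_IDS = {2, 4, 6, 8, 11, 13}           # stationary postures
+ DYNAMIC_IDS = set(LABEL_MAP) - STATIC_IDS   # transitions and movements
+ 
+ def motion_type(label_id: int) -> str:
+     """Return 'static' or 'dynamic' for a class ID (0-15)."""
+     return "static" if label_id in STATIC_IDS else "dynamic"
+ 
+ assert motion_type(1) == "dynamic"  # fall
+ assert motion_type(2) == "static"   # fallen
+ ```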
 
 
 
 
 
 
+ ## Data Format
 
+ ### CSV Columns (19 fields)
 
 ```python
+ # Core annotation fields
+ path                  # Video path (e.g., "fall/fall_ch_001")
+ label                 # Activity class ID (0-15)
+ start                 # Segment start time (seconds)
+ end                   # Segment end time (seconds)
+ subject               # -1 (synthetic data)
+ cam                   # -1 (single view)
+ dataset               # "wanfall"
+
+ # Demographic metadata (6 fields)
+ age_group             # toddlers_1_4, children_5_12, teenagers_13_17, young_adults_18_34, middle_aged_35_64, elderly_65_plus
+ gender_presentation   # male, female
+ monk_skin_tone        # mst1-mst10 (Monk Skin Tone scale)
+ race_ethnicity_omb    # white, black, asian, hispanic_latino, aian, nhpi, mena (OMB categories)
+ bmi_band              # underweight, normal, overweight, obese
+ height_band           # short, avg, tall
+
+ # Scene metadata (6 fields)
+ environment_category  # indoor, outdoor
+ camera_shot           # static_wide, static_medium_wide
+ speed                 # 24fps_rt, 25fps_rt, 30fps_rt, std_rt
+ camera_elevation      # eye, low, high, top
+ camera_azimuth        # front, rear, left, right
+ camera_distance       # medium, far
 ```
 
+ **References:**
+ - [Monk Skin Tone Scale](https://skintone.google/the-scale) - 10-point inclusive skin tone representation
+ - [OMB Race/Ethnicity Standards](https://www.census.gov/newsroom/blogs/random-samplings/2024/04/updates-race-ethnicity-standards.html)
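 
+ For working with the raw annotation file directly, a minimal pandas sketch (assuming the `labels/wanfall.csv` layout described above):
+ 
+ ```python
+ import pandas as pd
+ 
+ # Columns as listed above: path, label, start, end, subject, cam, dataset, ...
+ df = pd.read_csv("labels/wanfall.csv")
+ 
+ # Segment durations in seconds
+ df["duration"] = df["end"] - df["start"]
+ 
+ # Segments per activity class (0-15)
+ print(df["label"].value_counts().sort_index())
+ ```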
 
+ ## Split Configurations
 
+ ### 1. Random Split (80/10/10)
 
+ Standard baseline with random video assignment (seed 42).
 
+ | Split | Videos | Segments |
+ |-------|--------|----------|
+ | Train | 9,600 | 15,344 |
+ | Val | 1,200 | 1,956 |
+ | Test | 1,200 | 1,928 |
 
+ ```python
+ dataset = load_dataset("simplexsigil2/wanfall", "random")
 ```
 
+ ### 2. Cross-Age Split
 
+ Evaluates generalization across age groups. Train on adults, test on children and elderly.
 
+ | Split | Videos | Age Groups |
+ |-------|--------|------------|
+ | **Train** | 4,000 | `young_adults_18_34` (2,000)<br>`middle_aged_35_64` (2,000) |
+ | **Val** | 2,000 | `teenagers_13_17` (2,000) |
+ | **Test** | 6,000 | `children_5_12` (2,000)<br>`toddlers_1_4` (2,000)<br>`elderly_65_plus` (2,000) |
 
 ```python
+ dataset = load_dataset("simplexsigil2/wanfall", "cross_age")
 ```
 
+ ### 3. Cross-Ethnicity Split
+
+ Evaluates generalization across racial/ethnic groups with maximum phenotypic distance. Train on White/Asian/Hispanic, test on Black/MENA/NHPI.
 
+ | Split | Videos | Ethnicities |
+ |-------|--------|-------------|
+ | **Train** | 5,178 | `white` (1,709)<br>`asian` (1,691)<br>`hispanic_latino` (1,778) |
+ | **Val** | 1,741 | `aian` (1,741) |
+ | **Test** | 5,081 | `black` (1,684)<br>`mena` (1,680)<br>`nhpi` (1,717) |
 
 ```python
+ dataset = load_dataset("simplexsigil2/wanfall", "cross_ethnicity")
 ```
 
+ ### 4. Cross-BMI Split
+
+ Evaluates generalization across body types. Train on normal/underweight, validate on overweight, test on obese.
 
+ | Split | Videos | BMI Bands |
+ |-------|--------|-----------|
+ | **Train** | 6,066 | `normal` (3,040)<br>`underweight` (3,026) |
+ | **Val** | 2,962 | `overweight` (2,962) |
+ | **Test** | 2,972 | `obese` (2,972) |
 
 ```python
+ dataset = load_dataset("simplexsigil2/wanfall", "cross_bmi")
 ```
 
+ **Note:** All cross-demographic splits contain the same 12,000 unique videos, just partitioned differently.
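 
+ A quick leakage check over these splits (a sketch; it assumes `age_group` is a ClassLabel feature decodable via `int2str`, as in the label-conversion example below):
+ 
+ ```python
+ from datasets import load_dataset
+ 
+ ds = load_dataset("simplexsigil2/wanfall", "cross_age")
+ age = ds["train"].features["age_group"]
+ 
+ # Demographic groups seen in train vs. test should not overlap
+ train_groups = {age.int2str(a) for a in ds["train"]["age_group"]}
+ test_groups = {age.int2str(a) for a in ds["test"]["age_group"]}
+ assert train_groups.isdisjoint(test_groups)
+ ```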
 
+ ## Usage
 
+ ### Loading Modes
 
+ **Temporal Segments (default)** - Each sample is a segment with start/end times:
 
 ```python
 dataset = load_dataset("simplexsigil2/wanfall", "random")
+ # Train: 15,344 segments from 9,600 videos
+ # One video can have multiple segments
+
+ example = dataset['train'][0]
+ # {'path': 'fall/fall_ch_001', 'label': 1, 'start': 0.0, 'end': 1.006, ...}
 ```
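 
+ Since one clip can contribute several segments, an illustrative sketch for regrouping samples into per-video timelines:
+ 
+ ```python
+ from collections import defaultdict
+ 
+ from datasets import load_dataset
+ 
+ dataset = load_dataset("simplexsigil2/wanfall", "random")
+ 
+ # Collect (start, end, label) tuples per video path
+ by_video = defaultdict(list)
+ for ex in dataset["train"]:
+     by_video[ex["path"]].append((ex["start"], ex["end"], ex["label"]))
+ 
+ # Chronological timeline of one clip (example path from above)
+ timeline = sorted(by_video["fall/fall_ch_001"])
+ ```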
 
+ **Frame-Wise Labels** - Each sample is a video with 81 frame labels:
 
 ```python
 dataset = load_dataset("simplexsigil2/wanfall", "random", framewise=True)
+ # Train: 9,600 videos with 81 labels each
+ # One sample per video
+
+ example = dataset['train'][0]
+ # {'path': 'fall/fall_ch_001', 'frame_labels': [1, 1, 1, ..., 11, 11], ...}
+ ```
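 
+ Frame-wise and segment annotations are interconvertible. A minimal sketch (assuming 81 frames at 16 fps, as specified above) for collapsing frame labels back into segments:
+ 
+ ```python
+ def frames_to_segments(frame_labels, fps=16):
+     """Collapse runs of identical frame labels into (label, start_s, end_s)."""
+     segments, run_start = [], 0
+     for i in range(1, len(frame_labels) + 1):
+         if i == len(frame_labels) or frame_labels[i] != frame_labels[run_start]:
+             segments.append((frame_labels[run_start], run_start / fps, i / fps))
+             run_start = i
+     return segments
+ 
+ # A fall (label 1) followed by fallen (label 2):
+ print(frames_to_segments([1] * 40 + [2] * 41))  # [(1, 0.0, 2.5), (2, 2.5, 5.0625)]
+ ```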
 
+ **Additional Configs:**
+ ```python
+ # All segments (no splits)
+ dataset = load_dataset("simplexsigil2/wanfall", "labels")  # 19,228 segments
+ 
+ # Video metadata only
+ dataset = load_dataset("simplexsigil2/wanfall", "metadata")  # 12,000 videos
+ 
+ # Paths only (minimal)
+ dataset = load_dataset("simplexsigil2/wanfall", "random", paths_only=True)
 ```
 
+ ### Usage Examples
 
+ **Label Conversion:**
 ```python
+ dataset = load_dataset("simplexsigil2/wanfall", "random")
+ label_feature = dataset['train'].features['label']
+ 
+ label_name = label_feature.int2str(1)     # "fall"
+ label_id = label_feature.str2int("walk")  # 0
+ all_labels = label_feature.names          # List all labels
 ```
 
+ **Filter by Demographics:**
 ```python
 dataset = load_dataset("simplexsigil2/wanfall", "labels")
 segments = dataset['train']
 
+ age_feature = segments.features['age_group']
+ 
 # Filter elderly fall segments
 elderly_falls = [
     ex for ex in segments
+     if age_feature.int2str(ex['age_group']) == 'elderly_65_plus' and ex['label'] == 1  # 1 = fall
 ]
 ```
 
+ **Cross-Demographic Evaluation:**
 ```python
+ cross_age = load_dataset("simplexsigil2/wanfall", "cross_age", framewise=True)
+ 
+ # Train contains only young_adults_18_34 and middle_aged_35_64
+ # Test contains children_5_12, toddlers_1_4, elderly_65_plus
 ```
 
+ ## Annotation Guidelines
 
+ **Motion Types** (a rasterization sketch follows below):
+ - **Dynamic** actions are labeled from the first motion frame until a resting state. If one motion is followed by another, the boundary is the first frame showing movement that is not explained by the previous action.
+ - **Static** states begin when the person comes to rest and continue until the next motion begins. Sitting, for example, does not start when the body touches the chair, but when the body loses its tension and comes to rest.
 
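+ To make the timing convention concrete, here is a sketch that rasterizes segments onto the 81 frame slots. It assumes frame `i` covers `[i/16, (i+1)/16)` seconds, an alignment assumption rather than a documented guarantee:
+ 
+ ```python
+ def segments_to_frames(segments, num_frames=81, fps=16, fill=-1):
+     """Rasterize (label, start_s, end_s) segments onto per-frame labels.
+ 
+     Frames covered by no segment keep the sentinel `fill` (an unlabeled gap)."""
+     frame_labels = [fill] * num_frames
+     for label, start, end in segments:
+         for i in range(max(0, int(start * fps)), min(num_frames, int(end * fps))):
+             frame_labels[i] = label
+     return frame_labels
+ 
+ frames = segments_to_frames([(1, 0.0, 2.5), (2, 2.5, 5.0625)])
+ assert frames[:40] == [1] * 40 and frames[40:] == [2] * 41
+ ```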
 
+ ## Demographic Distribution
 
+ Rich demographic and scene metadata enables bias analysis and cross-demographic evaluation (a counting sketch follows at the end of this section).
 
 ![Demographic Overview](figures/demographic_overview.png)
 
+ **Note:** Metadata represents generation prompts. Due to generative model biases, actual visual attributes may deviate, particularly for ethnicity and body type. Age and gender are generally more reliable.
 
+ **Scene Variations:**
+ - Environments: Indoor/outdoor settings
+ - Camera angles: 4 elevations × 4 azimuths × 2 distances
+ - Shot types: Static wide and medium-wide
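 
+ A distribution check over the metadata config (a sketch; it assumes categorical fields decode via `int2str`, as in the examples above):
+ 
+ ```python
+ from collections import Counter
+ 
+ from datasets import load_dataset
+ 
+ meta = load_dataset("simplexsigil2/wanfall", "metadata")["train"]  # 12,000 videos
+ age = meta.features["age_group"]
+ 
+ # Videos per age group
+ print(Counter(age.int2str(a) for a in meta["age_group"]))
+ ```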
 
+ ## Video Data
+
+ **Videos will be released at a later point in time and are currently NOT included in this repository.**
+ - **Video specs:** 5.0625s duration, 81 frames @ 16fps, MP4 format
+ - **Access:** Videos must be obtained separately (information forthcoming)
 
 ## License
 
+ [![License: CC BY-NC 4.0](https://img.shields.io/badge/License-CC%20BY--NC%204.0-lightgrey.svg)](https://creativecommons.org/licenses/by-nc/4.0/)
 
+ Annotations and metadata released under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/). Video data is synthetic and subject to separate terms.