## Abstract
We present a novel dataset aimed at advancing danger analysis and assessment by addressing the challenge of quantifying danger in video content and measuring how human-like a Large Language Model (LLM) evaluator is at the same task. To this end, we compiled a collection of 100 YouTube videos featuring various events. Each video is annotated by human participants who provided danger ratings on a scale from 0 (no danger to humans) to 10 (life-threatening), with precise timestamps indicating moments of heightened danger. Additionally, we leverage LLMs to independently assess the danger levels in these videos from video summaries. We introduce Mean Squared Error (MSE) scores for multimodal meta-evaluation of the alignment between human and LLM danger assessments. Our dataset not only contributes a new resource for danger assessment in video content but also demonstrates the potential of LLMs to achieve human-like evaluations.
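To illustrate the MSE meta-evaluation, here is a minimal sketch with made-up ratings (illustrative values only, not taken from the dataset):

```python
# Hypothetical ratings on the paper's 0 (no danger) to 10 (life-threatening) scale.
human_ratings = [7, 2, 9, 4]  # made-up human annotations
llm_ratings = [6, 3, 8, 5]    # made-up LLM assessments

# MSE between the two raters: mean of the squared per-video differences.
mse = sum((h - l) ** 2 for h, l in zip(human_ratings, llm_ratings)) / len(human_ratings)
print(mse)  # → 1.0
```

A lower MSE indicates that the LLM's danger ratings track the human ratings more closely.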
## How to download and use this dataset
The code below downloads the dataset metadata, which includes the filenames, danger ratings, and temporal coordinates.
```
from datasets import load_dataset

dataset = load_dataset("pranked03/ViDAS")
```
The code below downloads a single video file, which can then be read with libraries such as OpenCV.
```
from huggingface_hub import hf_hub_download

i = 0  # index of the video; any value from 0 to 99

file_path = hf_hub_download(
    repo_id="pranked03/ViDAS", filename=dataset["train"][i]["video_id"], repo_type="dataset"
)
```
## Cite us in your work
```
@misc{gupta2024vidasvisionbaseddangerassessment,