pranked03 committed a7a5b01 (parent: a663742): Update README.md

Files changed (1): README.md (+3 −1)
README.md CHANGED
@@ -58,6 +58,7 @@ size_categories:
 We present a novel dataset aimed at advancing danger analysis and assessment by addressing the challenge of quantifying danger in video content and identifying how human-like a Large Language Model (LLM) evaluator is for the same. This is achieved by compiling a collection of 100 YouTube videos featuring various events. Each video is annotated by human participants who provided danger ratings on a scale from 0 (no danger to humans) to 10 (life-threatening), with precise timestamps indicating moments of heightened danger. Additionally, we leverage LLMs to independently assess the danger levels in these videos using video summaries. We introduce Mean Squared Error (MSE) scores for multimodal meta-evaluation of the alignment between human and LLM danger assessments. Our dataset not only contributes a new resource for danger assessment in video content but also demonstrates the potential of LLMs in achieving human-like evaluations.
 
 ## Cite us in your work
+```
 @misc{gupta2024vidasvisionbaseddangerassessment,
 title={ViDAS: Vision-based Danger Assessment and Scoring},
 author={Pranav Gupta and Advith Krishnan and Naman Nanda and Ananth Eswar and Deeksha Agarwal and Pratham Gohil and Pratyush Goel},
@@ -66,4 +67,5 @@ We present a novel dataset aimed at advancing danger analysis and assessment by
 archivePrefix={arXiv},
 primaryClass={cs.CV},
 url={https://arxiv.org/abs/2410.00477},
-}
+}
+```
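The MSE meta-evaluation mentioned in the README's abstract — comparing human danger ratings against LLM-assigned ratings per video — could be sketched as follows. This is a minimal illustration, not the authors' implementation; the rating values and variable names are hypothetical, not drawn from the ViDAS dataset.

```python
def mse(human_scores, llm_scores):
    """Mean Squared Error between two equal-length lists of danger ratings (0-10)."""
    assert len(human_scores) == len(llm_scores), "one rating per video from each source"
    n = len(human_scores)
    return sum((h - l) ** 2 for h, l in zip(human_scores, llm_scores)) / n

# Hypothetical per-video ratings for four videos (illustrative only).
human = [7.0, 2.5, 9.0, 4.0]  # aggregated human annotator ratings
llm   = [6.0, 3.0, 8.5, 5.0]  # LLM ratings derived from video summaries
print(mse(human, llm))  # → 0.625
```

A lower MSE indicates closer alignment between the LLM evaluator and the human annotators, which is what the abstract uses to quantify how "human-like" the LLM's assessments are.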