Dataset Card for ActivityNet Captions

Dataset Summary

The ActivityNet Captions dataset connects videos to a series of temporally annotated sentence descriptions. Each sentence covers a unique segment of the video and describes one event that occurs during that segment. Events may span very long or very short periods of time and are not constrained in any way, so they may co-occur. On average, each of the 20k videos contains 3.65 temporally localized sentences, resulting in a total of 100k sentences. The number of sentences per video follows a relatively normal distribution, and longer videos tend to contain more sentences. Each sentence has an average length of 13.48 words, which is also normally distributed. You can find more details in the ActivityNet Captions Dataset section and the supplementary materials of the paper.

Languages

The captions in the dataset are in English.

Dataset Structure

Data Fields

  • video_id: str, unique identifier for the video
  • video_path: str, path to the video file
  • duration: float32, duration of the video
  • captions_starts: list of float32, timestamps at which each caption starts
  • captions_ends: list of float32, timestamps at which each caption ends
  • en_captions: list of str, English captions describing parts of the video
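
For illustration, a single record with the fields above might look like the sketch below. This is a hand-written, hypothetical example: the values, the exact Python types, and the assumption that timestamps are in seconds are not taken from the actual data.

# Hypothetical record, hand-written from the field descriptions above.
# Values are illustrative only; timestamps are assumed to be in seconds.
example = {
    "video_id": "v_example_0001",
    "video_path": "videos/v_example_0001.mp4",
    "duration": 120.5,
    "captions_starts": [0.0, 35.2, 80.7],
    "captions_ends": [35.2, 80.7, 120.5],
    "en_captions": [
        "A man walks into a gym carrying a bag.",
        "He sets the bag down and begins lifting weights.",
        "He wipes his face with a towel and leaves the gym.",
    ],
}

# The three caption-related lists are parallel: entry i pairs the i-th
# caption with its start and end timestamps.
for start, end, caption in zip(
    example["captions_starts"], example["captions_ends"], example["en_captions"]
):
    print(f"[{start:7.2f} - {end:7.2f}] {caption}")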

Data Splits

              train    validation    test     Overall
# of videos   10,009   4,917         4,885    19,811
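
To work with these splits programmatically, a minimal loading sketch with the Hugging Face datasets library follows. The repository ID and split names are assumptions based on this card and may need adjusting to the actual repository layout.

from datasets import load_dataset

# Minimal sketch: load the dataset and print the number of videos per split.
# "HuggingFaceM4/ActivitiyNet_Captions" is the repository ID assumed from this
# card; adjust it (and the split names) if the Hub repository differs.
ds = load_dataset("HuggingFaceM4/ActivitiyNet_Captions")

for split in ("train", "validation", "test"):
    print(split, len(ds[split]))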

Annotations

Quoting the ActivityNet Captions paper:
"Each annotation task was divided into two steps: (1) Writing a paragraph describing all major events happening in the videos in a paragraph, with each sentence of the paragraph describing one event, and (2) Labeling the start and end time in the video in which each sentence in the paragraph event occurred."

Who annotated the dataset?

Amazon Mechanical Turk annotators.

Personal and Sensitive Information

The paper does not specifically discuss personal or sensitive information.

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Licensing Information

[More Information Needed]

Citation Information

@inproceedings{krishna2017dense,
    title={Dense-Captioning Events in Videos},
    author={Krishna, Ranjay and Hata, Kenji and Ren, Frederic and Fei-Fei, Li and Niebles, Juan Carlos},
    booktitle={International Conference on Computer Vision (ICCV)},
    year={2017}
}

Contributions

Thanks to @leot13 for adding this dataset.
