|
--- |
|
language: |
|
- en |
|
size_categories: |
|
- 1K<n<10K |
|
task_categories: |
|
- summarization |
|
pretty_name: 'TIB: A Dataset for Abstractive Summarization of Long Multimodal Videoconference Records'
|
dataset_info: |
|
features: |
|
- name: doi |
|
dtype: string |
|
- name: title |
|
dtype: string |
|
- name: url |
|
dtype: string |
|
- name: video_url |
|
dtype: string |
|
- name: license |
|
dtype: string |
|
- name: subject |
|
dtype: string |
|
- name: genre |
|
dtype: string |
|
- name: release_year |
|
dtype: string |
|
- name: author |
|
dtype: string |
|
- name: contributors |
|
dtype: string |
|
- name: abstract |
|
dtype: string |
|
- name: transcript |
|
dtype: string |
|
- name: transcript_segments |
|
sequence: |
|
- name: id |
|
dtype: int32 |
|
- name: seek |
|
dtype: int32 |
|
- name: start |
|
dtype: float32 |
|
- name: end |
|
dtype: float32 |
|
- name: text |
|
dtype: string |
|
- name: tokens |
|
sequence: int32 |
|
- name: temperature |
|
dtype: float32 |
|
- name: avg_logprob |
|
dtype: float32 |
|
- name: compression_ratio |
|
dtype: float32 |
|
- name: no_speech_prob |
|
dtype: float32 |
|
- name: keyframes |
|
sequence: |
|
- name: slide |
|
dtype: string |
|
- name: frames |
|
sequence: int32 |
|
- name: timestamp |
|
sequence: float32 |
|
- name: language |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 827419303 |
|
num_examples: 7282 |
|
- name: test |
|
num_bytes: 102381848 |
|
num_examples: 911 |
|
- name: valid |
|
num_bytes: 101368222 |
|
num_examples: 910 |
|
download_size: 501919138 |
|
dataset_size: 1031169373 |
|
pinned: true |
|
configs: |
|
- config_name: default |
|
data_files: |
|
- split: train |
|
path: data/train-* |
|
- split: test |
|
path: data/test-* |
|
- split: valid |
|
path: data/valid-* |
|
--- |
|
# Dataset Card for "TIB: A Dataset for Abstractive Summarization of Long Multimodal Videoconference Records" |
|
|
|
|
|
|
## Dataset Description |
|
|
|
- **Homepage:** [Dataset page](https://huggingface.co/datasets/gigant/tib) |
|
- **Repository:** [Dataset page](https://huggingface.co/datasets/gigant/tib) |
|
- **Paper:** [TIB: A Dataset for Abstractive Summarization of Long Multimodal Videoconference Records](https://hal.science/hal-04168911)
|
- **Point of Contact:** [Théo Gigant](mailto:theo.gigant@l2s.centralesupelec.fr) |
|
|
|
## Dataset Summary |
|
|
|
TIB is an English dataset for abstractive summarization of multimodal presentations, introduced in [*TIB: A Dataset for Abstractive Summarization of Long Multimodal Videoconference Records*](https://hal.science/hal-04168911).
|
It is a collection of 9,103 videoconference records extracted from the archive of the German National Library of Science and Technology (TIB), along with their metadata, abstracts, and automatically extracted transcripts and key frames.
|
|
|
### Supported Tasks and Leaderboards |
|
|
|
- `summarization` |
|
|
|
### Languages |
|
|
|
The text in the dataset is in English, both for the transcribed audio and the abstracts.
|
|
|
## Usage |
|
|
|
To use within the [`datasets`](https://github.com/huggingface/datasets) library: |
|
|
|
```python |
|
from datasets import load_dataset |
|
|
|
dataset = load_dataset("gigant/tib") |
|
``` |
|
|
|
## Dataset Structure |
|
|
|
### Data Instances |
|
|
|
A typical data point represents a videoconference record: the `transcript` and `keyframes` fields are the textual and visual modalities extracted from the video found at `video_url`, and the `abstract` serves as the target abstractive summary.
|
|
|
### Data Fields |
|
|
|
Each record consists of the following attributes:
|
* `doi`: digital object identifier (DOI) of the record or the associated paper |
|
* `title`: title of the presentation |
|
* `url`: URL of the record in the TIB archive |
|
* `video_url`: URL of the video file |
|
* `license`: license of the record |
|
* `subject`: academic field (*e.g.*, Computer Science, Mathematics, ...)
|
* `genre`: type of presentation (*e.g.*, Lecture, Conference, ...)
|
* `release_year`: year the record was released |
|
* `author`: name of the author |
|
* `contributors`: names of the contributors
|
* `abstract`: the abstract of the presentation, which serves as the target summary
|
* `transcript`: the automatically extracted transcript |
|
* `transcript_segments`: the automatically extracted transcript with time codes, as output by the speech recognition system
|
* `keyframes`: time codes of the automatically extracted key frames
|
|
|
`doi`, `title`, `url`, `video_url`, `license`, `subject`, `genre`, `release_year`, `author`, `contributors` and `abstract` are provided as found in the TIB archive. The length, style, quality, and content of the abstracts can differ from video to video, as each was likely provided by its author. For instance, some abstracts give very short, title-like summaries, others introduce the conference, the lecture, or the speaker, and others describe the content at length. Examples of transcripts and summaries are provided in the paper's Appendix.
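The nested fields can be used to align the two modalities. As an illustrative sketch (on a hand-made toy record rather than real data; a sequence of named features is exposed as a dict of lists, and the toy omits fields such as `frames` and `tokens`), the key-frame time codes can be matched against segment time codes to recover the transcript text spoken while a given slide is on screen:

```python
# Toy record mimicking the schema above (all values are made up).
record = {
    "transcript_segments": {
        "id": [0, 1, 2],
        "start": [0.0, 12.5, 30.0],
        "end": [12.5, 30.0, 45.0],
        "text": [" Welcome to the talk.", " First slide.", " Second slide."],
    },
    "keyframes": {
        "slide": ["slide_000", "slide_001"],
        "timestamp": [[0.0], [28.0]],
    },
}

def text_for_keyframe(record, slide_index):
    """Concatenate transcript segments that overlap the span during
    which a key frame is shown (its timestamp until the next one)."""
    timestamps = [ts[0] for ts in record["keyframes"]["timestamp"]]
    start = timestamps[slide_index]
    end = timestamps[slide_index + 1] if slide_index + 1 < len(timestamps) else float("inf")
    segs = record["transcript_segments"]
    return "".join(
        text
        for s, e, text in zip(segs["start"], segs["end"], segs["text"])
        if s < end and e > start  # segment overlaps the slide's span
    )

print(text_for_keyframe(record, 0))
```

The overlap test rather than strict containment keeps segments that straddle a slide change, which is common since speech does not pause exactly at slide transitions.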
|
|
|
### Data Splits |
|
|
|
The data is split into training, validation, and test sets.
|
|
|
* Train: 7,282 (80%) |
|
* Validation: 910 (10%) |
|
* Test: 911 (10%) |
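As a quick sanity check, the split sizes above sum to the 9,103 records mentioned in the summary and match the 80/10/10 proportions:

```python
# Split sizes as stated in the dataset card.
splits = {"train": 7282, "valid": 910, "test": 911}

total = sum(splits.values())
assert total == 9103  # matches the record count in the summary

for name, n in splits.items():
    print(f"{name}: {n} ({100 * n / total:.1f}%)")
```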
|
|
|
## Dataset Creation |
|
|
|
### Source Data |
|
|
|
#### Initial Data Collection and Normalization |
|
|
|
The dataset was first assembled by crawling the [TIB AV-Portal](https://av.tib.eu/), a large video archive developed by the German National Library of Science and Technology: *Technische Informationsbibliothek* (TIB).
|
Entries with missing abstracts, or with abstracts shorter than 30 characters, were filtered out.

We also filtered out records whose abstract or transcript is in a language other than English.

To keep only abstracts that are relevant to their associated record, we removed documents whose abstract is identical to that of another video. This discards abstracts that were written for an entire set of records, such as a whole conference, rather than for a single presentation.
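These filtering heuristics can be sketched as follows. This is a simplified illustration on toy metadata: the `language` tag is assumed precomputed here, whereas the actual pipeline (described in the paper) uses automatic language identification on the abstract and transcript.

```python
from collections import Counter

def filter_records(records, min_abstract_chars=30):
    """Drop records with missing/short abstracts, non-English text,
    and abstracts shared verbatim by several videos."""
    # 1. Missing or too-short abstracts.
    kept = [r for r in records
            if r.get("abstract") and len(r["abstract"]) >= min_abstract_chars]
    # 2. Non-English abstract or transcript (a precomputed tag in this sketch).
    kept = [r for r in kept if r.get("language") == "en"]
    # 3. Abstracts duplicated across records, e.g. one blurb reused
    #    for every talk of the same conference.
    counts = Counter(r["abstract"] for r in kept)
    return [r for r in kept if counts[r["abstract"]] == 1]

toy = [
    {"title": "A", "abstract": "x" * 40, "language": "en"},
    {"title": "B", "abstract": "too short", "language": "en"},   # dropped: short
    {"title": "C", "abstract": "y" * 40, "language": "de"},      # dropped: language
    {"title": "D", "abstract": "shared blurb " * 4, "language": "en"},
    {"title": "E", "abstract": "shared blurb " * 4, "language": "en"},  # D and E dropped: duplicate
]
print([r["title"] for r in filter_records(toy)])  # ['A']
```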
|
|
|
More information about the dataset collection and filtering can be found in [TIB: A Dataset for Abstractive Summarization of Long Multimodal Videoconference Records](https://hal.science/hal-04168911).
|
|
|
### Dataset Curators |
|
|
|
The dataset was initially created by Théo Gigant, Frédéric Dufaux, Camille Guinaudeau and Marc Decombas. |
|
|
|
### Citation Information |
|
|
|
``` |
|
@inproceedings{gigant:hal-04168911, |
|
TITLE = {{TIB: A Dataset for Abstractive Summarization of Long Multimodal Videoconference Records}}, |
|
AUTHOR = {GIGANT, Th{\'e}o and Dufaux, Fr{\'e}d{\'e}ric and Guinaudeau, Camille and Decombas, Marc}, |
|
URL = {https://hal.science/hal-04168911}, |
|
BOOKTITLE = {{Proc. 20th International Conference on Content-based Multimedia Indexing (CBMI 2023)}}, |
|
ADDRESS = {Orl{\'e}ans, France}, |
|
ORGANIZATION = {{ACM}}, |
|
YEAR = {2023}, |
|
MONTH = Sep, |
|
KEYWORDS = {multimedia dataset, multimodal documents, automatic summarization}, |
|
HAL_ID = {hal-04168911}, |
|
HAL_VERSION = {v1}, |
|
} |
|
``` |