---
language:
- en
size_categories:
- 1K<n<10K
task_categories:
- summarization
pretty_name: 'TIB: A Dataset for Abstractive Summarization of Long Multimodal Videoconference
  Records'
dataset_info:
  features:
  - name: doi
    dtype: string
  - name: title
    dtype: string
  - name: url
    dtype: string
  - name: video_url
    dtype: string
  - name: license
    dtype: string
  - name: subject
    dtype: string
  - name: genre
    dtype: string
  - name: release_year
    dtype: string
  - name: author
    dtype: string
  - name: contributors
    dtype: string
  - name: abstract
    dtype: string
  - name: transcript
    dtype: string
  - name: transcript_segments
    sequence:
    - name: id
      dtype: int32
    - name: seek
      dtype: int32
    - name: start
      dtype: float32
    - name: end
      dtype: float32
    - name: text
      dtype: string
    - name: tokens
      sequence: int32
    - name: temperature
      dtype: float32
    - name: avg_logprob
      dtype: float32
    - name: compression_ratio
      dtype: float32
    - name: no_speech_prob
      dtype: float32
  - name: keyframes
    sequence:
    - name: slide
      dtype: string
    - name: frames
      sequence: int32
    - name: timestamp
      sequence: float32
  - name: language
    dtype: string
  splits:
  - name: train
    num_bytes: 827419303
    num_examples: 7282
  - name: test
    num_bytes: 102381848
    num_examples: 911
  - name: valid
    num_bytes: 101368222
    num_examples: 910
  download_size: 501919138
  dataset_size: 1031169373
pinned: true
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
  - split: valid
    path: data/valid-*
---
# Dataset Card for "TIB: A Dataset for Abstractive Summarization of Long Multimodal Videoconference Records"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Description

- **Homepage:** [Dataset page](https://huggingface.co/datasets/gigant/tib)
- **Repository:** [Dataset page](https://huggingface.co/datasets/gigant/tib)
- **Paper:** [TIB: A Dataset for Abstractive Summarization of Long Multimodal Videoconference Records](https://hal.science/hal-04168911)
- **Point of Contact:** [Théo Gigant](mailto:theo.gigant@l2s.centralesupelec.fr)

## Dataset Summary

TIB is an English dataset for abstractive summarization of multimodal presentations, introduced in [*TIB: A Dataset for Abstractive Summarization of Long Multimodal Videoconference Records*](https://hal.science/hal-04168911).
It is a collection of 9,103 videoconference records extracted from the archive of the German National Library of Science and Technology (TIB), along with their metadata, abstracts, and automatically processed transcripts and key frames.

### Supported Tasks and Leaderboards

- `summarization`

### Languages

The text in the dataset is in English, both in the transcribed audio and in the abstracts.

## Usage

To use within the [`datasets`](https://github.com/huggingface/datasets) library:

```python
from datasets import load_dataset

dataset = load_dataset("gigant/tib")
```

## Dataset Structure

### Data Instances

A typical data point represents a videoconference record: the `transcript` and `keyframes` fields are the textual and visual modalities, processed from the video found at `video_url`, and the `abstract` is used as the target abstractive summary.
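
Each example is returned as a plain Python dictionary whose keys match the fields listed under Data Fields below. A minimal sketch of inspecting one record (streaming is used here only to avoid downloading the full dataset up front):

```python
from datasets import load_dataset

# Stream a single example instead of downloading the whole dataset.
dataset = load_dataset("gigant/tib", split="train", streaming=True)
record = next(iter(dataset))

print(record["title"])
print(record["abstract"][:300])    # target summary
print(record["transcript"][:300])  # ASR transcript (textual modality)

# `transcript_segments` and `keyframes` are nested, time-coded fields; depending
# on the `datasets` version, sequences of named fields may be returned as a
# dictionary of lists rather than a list of dictionaries.
print(type(record["transcript_segments"]), type(record["keyframes"]))
```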

### Data Fields

Each record consists of the following attributes:
* `doi`: digital object identifier (DOI) of the record or the associated paper
* `title`: title of the presentation
* `url`: URL of the record in the TIB archive
* `video_url`: URL of the video file
* `license`: license of the record
* `subject`: academic field (*e.g.* Computer Science, Mathematics, ...)
* `genre`: type of presentation (*e.g.* Lecture, Conference, ...)
* `release_year`: year the record was released
* `author`: name of the author
* `contributors`: name of the contributors
* `abstract`: the abstract of the presentation, which serves as the target summary
* `transcript`: the automatically extracted transcript
* `transcript_segments`: the automatically extracted transcript with time codes, as output by the speech recognition system
* `keyframes`: the time codes of the automatically extracted key frames

`doi`, `title`, `url`, `video_url`, `license`, `subject`, `genre`, `release_year`, `author`, `contributors` and `abstract` are provided as found in the TIB archive. The length, style, quality and content of the abstracts can differ from video to video, as each one was likely provided by its author. For instance, some abstracts are very short, title-like summaries, some introduce the conference, the lecture or the speaker, and others give longer descriptions of the content. We provide examples of transcripts and summaries in the paper's Appendix. A sketch combining the two time-coded fields follows.
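
As a rough sketch of how `transcript_segments` and `keyframes` can be combined (assuming both sets of time codes are expressed in seconds, and that the nested sequences are returned as dictionaries of lists, which is the default behaviour of `datasets` for sequences of named fields), the following groups transcript segments by the key frame that is on screen while they are spoken:

```python
def group_segments_by_keyframe(record):
    """Collect, for each key frame, the transcript segments spoken while it is shown."""
    segments = record["transcript_segments"]  # dict of lists: "start", "end", "text", ...
    keyframes = record["keyframes"]           # dict of lists: "slide", "frames", "timestamp"

    # Take the first timestamp of each slide as the moment it appears on screen,
    # assuming slides are listed in order of appearance.
    slide_starts = [ts[0] for ts in keyframes["timestamp"] if ts]
    slide_starts.append(float("inf"))  # sentinel closing the last interval

    # Segments spoken before the first key frame are simply skipped in this sketch.
    grouped = [[] for _ in range(len(slide_starts) - 1)]
    for seg_start, seg_text in zip(segments["start"], segments["text"]):
        for i in range(len(grouped)):
            if slide_starts[i] <= seg_start < slide_starts[i + 1]:
                grouped[i].append(seg_text)
                break
    return grouped
```

`grouped[i]` then holds the text spoken while the `i`-th slide is displayed, which can be useful for building slide-level multimodal inputs.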

### Data Splits

The data is split into training, validation, and test sets; a loading example follows the list below.

* Train: 7,282 (80%)
* Validation: 910 (10%)
* Test: 911 (10%)
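
The splits are exposed under the names `train`, `valid` and `test` (note `valid`, not `validation`):

```python
from datasets import load_dataset

train = load_dataset("gigant/tib", split="train")
valid = load_dataset("gigant/tib", split="valid")
test = load_dataset("gigant/tib", split="test")

print(len(train), len(valid), len(test))  # 7282, 910, 911
```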

## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization

The dataset was first assembled by crawling the [TIB AV-Portal](https://av.tib.eu/), a large video archive developed by the German National Library of Science and Technology (*Technische Informationsbibliothek*, TIB).
Entries with missing abstracts or abstracts that were too short (fewer than 30 characters) were filtered out.
We also filtered out records whose abstract or transcript is in a language other than English.
To keep only abstracts that are relevant to their associated record, we removed documents whose abstract is identical to the abstract of another video. This got rid of abstracts that were written for a whole set of records, such as an entire conference, rather than specifically for a single presentation.
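
As a simplified, hypothetical sketch of these filtering criteria (not the authors' actual pipeline; language identification is delegated to a caller-provided `detect_language` function, e.g. a wrapper around any language-ID model):

```python
from collections import Counter

def filter_records(records, detect_language):
    """Simplified illustration of the filtering criteria described above.

    `records` is an iterable of dictionaries with "abstract" and "transcript" keys;
    `detect_language` is any callable mapping a text to a language code such as "en".
    """
    records = list(records)
    # Count how many records share each abstract, so that abstracts written for a
    # whole set of records (e.g. an entire conference) can be dropped entirely.
    abstract_counts = Counter((r.get("abstract") or "").strip() for r in records)

    kept = []
    for r in records:
        abstract = (r.get("abstract") or "").strip()
        if len(abstract) < 30:             # missing or too-short abstract
            continue
        if abstract_counts[abstract] > 1:  # abstract shared with another video
            continue
        if detect_language(abstract) != "en" or detect_language(r["transcript"]) != "en":
            continue                       # abstract or transcript not in English
        kept.append(r)
    return kept
```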

More information about the dataset collection and filtering can be found in [TIB: A Dataset for Abstractive Summarization of Long Multimodal Videoconference Records](https://hal.science/hal-04168911).

### Dataset Curators

The dataset was initially created by Théo Gigant, Frédéric Dufaux, Camille Guinaudeau and Marc Decombas.

### Citation Information

```
@inproceedings{gigant:hal-04168911,
  TITLE = {{TIB: A Dataset for Abstractive Summarization of Long Multimodal Videoconference Records}},
  AUTHOR = {GIGANT, Th{\'e}o and Dufaux, Fr{\'e}d{\'e}ric and Guinaudeau, Camille and Decombas, Marc},
  URL = {https://hal.science/hal-04168911},
  BOOKTITLE = {{Proc. 20th International Conference on Content-based Multimedia Indexing (CBMI 2023)}},
  ADDRESS = {Orl{\'e}ans, France},
  ORGANIZATION = {{ACM}},
  YEAR = {2023},
  MONTH = Sep,
  KEYWORDS = {multimedia dataset, multimodal documents, automatic summarization},
  HAL_ID = {hal-04168911},
  HAL_VERSION = {v1},
}
```