
Dataset Card for Nota Lyd- og tekstdata

Dataset Summary

This dataset was created by Nota, a public institution under the Danish Ministry of Culture. Nota maintains a library of audiobooks and audiomagazines for people with reading or sight disabilities, and also produces a number of audiobooks and audiomagazines itself.

The dataset consists of .wav and .txt files from Nota's audiomagazines "Inspiration" and "Radio/TV".

The dataset has been published as part of an initiative within the Danish Agency for Digital Government.

The dataset contains 336 GB of voice recordings with accompanying transcripts.

Each publication has been segmented into .wav files of 2-50 seconds, each with an accompanying transcription.
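The 2-50 second bound can be checked with the standard library's wave module. The sketch below builds a synthetic clip in memory rather than reading real dataset files; the 44.1 kHz rate matches the sampling rate shown in the data instance below, and the helper names are illustrative, not part of the dataset's tooling.

```python
import io
import wave

def clip_duration_seconds(wav_bytes: bytes) -> float:
    """Return the duration of a WAV clip in seconds."""
    with wave.open(io.BytesIO(wav_bytes), "rb") as w:
        return w.getnframes() / w.getframerate()

def is_valid_clip(wav_bytes: bytes, min_s: float = 2.0, max_s: float = 50.0) -> bool:
    """Check a clip against the card's stated 2-50 second segment length."""
    return min_s <= clip_duration_seconds(wav_bytes) <= max_s

# Build a synthetic 3-second mono clip at 44.1 kHz, 16-bit samples.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(44100)
    w.writeframes(b"\x00\x00" * 44100 * 3)

print(is_valid_clip(buf.getvalue()))  # True
```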

Supported Tasks and Leaderboards

[Needs More Information]



Dataset Structure

Data Instances

A typical data point comprises the path to the audio file, called path, and its sentence: {'path': '<path_to_clip>.wav', 'sentence': 'Dette er et eksempel', 'audio': {'path': '<path_to_clip>.wav', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 44100}}

Data Fields

path: The path to the audio file

audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: dataset[0]["audio"] the audio file is automatically decoded and resampled to dataset.features["audio"].sampling_rate. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the "audio" column, i.e. dataset[0]["audio"] should always be preferred over dataset["audio"][0].
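Why row-before-column access matters can be illustrated with a toy stand-in for a lazily decoded column. This is not the datasets library's implementation, only a sketch of the decode-on-access behaviour the paragraph above describes:

```python
class LazyAudioColumn:
    """Toy model of a lazily decoded audio column: every access to an
    'audio' value triggers a (here, simulated) decode of the file."""

    def __init__(self, paths):
        self.paths = paths
        self.decodes = 0  # how many files have been decoded so far

    def _decode(self, path):
        self.decodes += 1
        return {"path": path, "array": [0.0], "sampling_rate": 44100}

    def get_row_audio(self, i):
        # dataset[i]["audio"]: only the requested clip is decoded
        return self._decode(self.paths[i])

    def get_full_column(self):
        # dataset["audio"]: every clip in the split is decoded
        return [self._decode(p) for p in self.paths]

ds = LazyAudioColumn([f"clip_{i}.wav" for i in range(10_000)])
ds.get_row_audio(0)
print(ds.decodes)   # 1: a single row access decodes one file
ds.get_full_column()
print(ds.decodes)   # 10001: the column access decoded everything
```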

sentence: The sentence that was read by the speaker

Data Splits

For now, the material only has a train split. As the dataset is at a very early stage, additional splits might be introduced later.

Dataset Creation


There might be minor discrepancies between the .wav and .txt files. As a result, there may be alignment issues between timestamps, text, and sound files.

There are no strict rules as to how readers read non-letter characters aloud (e.g. numbers, €, $, !, ?). These symbols can be read differently throughout the dataset.
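If consistent transcripts are needed, for instance for ASR training, a small normalization pass can map such symbols to a single spoken form. The mapping below is a hypothetical example (the dataset does not define a canonical spoken form for these symbols):

```python
import re

# Hypothetical Danish-oriented replacements; illustrative only.
SYMBOL_WORDS = {
    "€": "euro",
    "$": "dollar",
    "%": "procent",
}

def normalize(text: str) -> str:
    """Replace symbols with words and drop punctuation readers do not voice."""
    for symbol, word in SYMBOL_WORDS.items():
        text = text.replace(symbol, f" {word} ")
    text = re.sub(r"[!?]", "", text)
    # Collapse any whitespace introduced by the replacements
    return re.sub(r"\s+", " ", text).strip()

print(normalize("Prisen er 100 € !"))  # "Prisen er 100 euro"
```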

Curation Rationale

[Needs More Information]

Source Data

Initial Data Collection and Normalization

[Needs More Information]

Who are the source language producers?

[Needs More Information]


Annotation process

[Needs More Information]

Who are the annotators?

[Needs More Information]

Personal and Sensitive Information

The dataset is made public and free to use. Recorded individuals have, by written contract, accepted and agreed to the publication of their recordings. Other names appearing in the dataset belong to already publicly known individuals (e.g. TV or radio hosts). Their names are not to be treated as sensitive or personal data in the context of this dataset.

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

Contact Nota if you have questions regarding the use of the data. They gladly receive input and ideas on how to distribute the data.

Licensing Information

