📺 YouTube-Commons 📺

YouTube-Commons is a collection of audio transcripts of 2,063,066 videos shared on YouTube under a CC-BY license.

Content

The collection comprises 22,709,724 original and automatically translated transcripts from 3,156,703 videos (721,136 individual channels).

In total, this represents nearly 45 billion words (44,811,518,375).

All the videos were shared on YouTube under a CC-BY license: the dataset provides all the necessary provenance information, including the title, link, channel name, and upload date.
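
As a minimal sketch of how this provenance metadata could be inspected with the datasets library (assuming the files load directly with load_dataset, and that columns such as title, channel, upload_date, and video_link exist; none of these names are confirmed by this card):

```python
from datasets import load_dataset

# Stream the corpus rather than downloading everything up front.
# The column names below (title, channel, upload_date, video_link) are
# assumptions for illustration -- verify them against the actual schema.
ds = load_dataset("PleIAs/YouTube-Commons", split="train", streaming=True)

for record in ds.take(3):
    print(record.get("title"), "|", record.get("channel"))
    print("  link:", record.get("video_link"), "| uploaded:", record.get("upload_date"))
```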

The corpus is multilingual, with a majority of English-language content (71% of original languages). Automated translations into English, French, Spanish, German, Russian, Italian, and Dutch are provided for nearly all the videos.
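
For example, selecting only transcripts whose source video was originally in one language could look like the sketch below (the original_language column and its "fr" code are assumptions, not a confirmed schema):

```python
from datasets import load_dataset

# Hypothetical language filter: assumes a column recording the source
# video's original language (called "original_language" here).
ds = load_dataset("PleIAs/YouTube-Commons", split="train", streaming=True)
french_originals = ds.filter(lambda row: row.get("original_language") == "fr")

for row in french_originals.take(2):
    print(row.get("title"))
```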

Uses

The collection aims to expand the availability of conversational data for research in AI, computational social science and digital humanities.

Most of the available resources under free licenses are written texts such as public domain works or open science articles.

The text can be used for training models and can be republished for reproducibility purposes.

License and ethics

All the transcripts are part of videos shared under a CC-BY license. In accordance with the provisions of the license, every YouTube channel is fully credited.

While content under a free license can be lawfully reproduced in any setting, there is currently a debate over the legitimacy and proper ethical use of free content for pre-training large language models.

In accordance with the philosophy of Creative Commons, we recommend that this set be used preferably for open research. Furthermore, the license requires that the contribution of each individual author be properly credited. In a research context, the best way to achieve this is to fully release the data sources used for training or, at the very least, to provide extensive open documentation.
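
A minimal sketch of such documentation is an attribution manifest listing every credited channel and video; again, the channel, title, and video_link columns are hypothetical:

```python
import csv

from datasets import load_dataset

# Write a CC-BY attribution manifest for the records actually used.
# Column names ("channel", "title", "video_link") are assumptions.
ds = load_dataset("PleIAs/YouTube-Commons", split="train", streaming=True)

with open("attribution_manifest.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["channel", "title", "link"])
    # Limited to 1,000 rows for the sketch; iterate the full set in practice.
    for row in ds.take(1000):
        writer.writerow([row.get("channel"), row.get("title"), row.get("video_link")])
```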

Future developments

The collection is far from covering all the YouTube videos available under a Creative Commons license. We will continue to expand it significantly.

Additional releases will also focus on transcripts from other video sources not available on YouTube (especially public service and university websites).

Acknowledgements

The corpus was stored and processed with the generous support of Scaleway. It was built with the support and concerted efforts of the state start-up LANGU:IA (start-up d'État), supported by the French Ministry of Culture and DINUM, as part of the prefiguration of the service offering of the Alliance for Language Technologies EDIC (ALT-EDIC).

Pleias corpus collection projects have also been facilitated by the support, insights, and cooperation of the open science LLM community (Occiglot, Eleuther AI, Allen AI).
