Dataset Preview
The full dataset viewer is not available for this dataset (generation failed with error code UnexpectedError); only a preview of the rows is shown below.
Columns (name: type):
url: string
repository_url: string
labels_url: string
comments_url: string
events_url: string
html_url: string
id: int64
node_id: string
number: int64
title: string
user: dict
labels: list
state: string
locked: bool
assignee: null
assignees: sequence
milestone: null
comments: sequence
created_at: int64
updated_at: int64
closed_at: int64
author_association: string
active_lock_reason: null
body: string
reactions: dict
timeline_url: string
performed_via_github_app: null
state_reason: null
draft: bool
pull_request: dict
is_pull_request: bool
https://api.github.com/repos/huggingface/datasets/issues/4906
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4906/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4906/comments
https://api.github.com/repos/huggingface/datasets/issues/4906/events
https://github.com/huggingface/datasets/issues/4906
id: 1,353,223,925
node_id: I_kwDODunzps5QqI71
number: 4,906
title: Can't import datasets AttributeError: partially initialized module 'datasets' has no attribute 'utils' (most likely due to a circular import)
{ "login": "OPterminator", "id": 63536981, "node_id": "MDQ6VXNlcjYzNTM2OTgx", "avatar_url": "https://avatars.githubusercontent.com/u/63536981?v=4", "gravatar_id": "", "url": "https://api.github.com/users/OPterminator", "html_url": "https://github.com/OPterminator", "followers_url": "https://api.github.com/users/OPterminator/followers", "following_url": "https://api.github.com/users/OPterminator/following{/other_user}", "gists_url": "https://api.github.com/users/OPterminator/gists{/gist_id}", "starred_url": "https://api.github.com/users/OPterminator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/OPterminator/subscriptions", "organizations_url": "https://api.github.com/users/OPterminator/orgs", "repos_url": "https://api.github.com/users/OPterminator/repos", "events_url": "https://api.github.com/users/OPterminator/events{/privacy}", "received_events_url": "https://api.github.com/users/OPterminator/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,661,653,404,000
updated_at: 1,661,653,842,000
closed_at: null
author_association: NONE
active_lock_reason: null
## Describe the bug A clear and concise description of what the bug is. Not able to import datasets ## Steps to reproduce the bug ```python # Sample code to reproduce the bug import os os.environ["WANDB_API_KEY"] = "0" ## to silence warning import numpy as np import random import sklearn import matplotlib.pyplot as plt import pandas as pd import sys import tensorflow as tf import plotly.express as px import transformers import tokenizers import nlp as nlp import utils import datasets ``` ## Expected results A clear and concise description of the expected results. import should work normal ## Actual results Specify the actual results or traceback. --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-21-b3b5b0b62103> in <module> 13 import nlp as nlp 14 import utils ---> 15 import datasets ~\anaconda3\lib\site-packages\datasets\__init__.py in <module> 44 from .fingerprint import disable_caching, enable_caching, is_caching_enabled, set_caching_enabled 45 from .info import DatasetInfo, MetricInfo ---> 46 from .inspect import ( 47 get_dataset_config_info, 48 get_dataset_config_names, ~\anaconda3\lib\site-packages\datasets\inspect.py in <module> 28 from .download.streaming_download_manager import StreamingDownloadManager 29 from .info import DatasetInfo ---> 30 from .load import dataset_module_factory, import_main_class, load_dataset_builder, metric_module_factory 31 from .utils.file_utils import relative_to_absolute_path 32 from .utils.logging import get_logger ~\anaconda3\lib\site-packages\datasets\load.py in <module> 53 from .iterable_dataset import IterableDataset 54 from .metric import Metric ---> 55 from .packaged_modules import ( 56 _EXTENSION_TO_MODULE, 57 _MODULE_SUPPORTS_METADATA, ~\anaconda3\lib\site-packages\datasets\packaged_modules\__init__.py in <module> 4 from typing import List 5 ----> 6 from .csv import csv 7 from .imagefolder import imagefolder 8 from .json import json ~\anaconda3\lib\site-packages\datasets\packaged_modules\csv\csv.py in <module> 13 14 ---> 15 logger = datasets.utils.logging.get_logger(__name__) 16 17 _PANDAS_READ_CSV_NO_DEFAULT_PARAMETERS = ["names", "prefix"] AttributeError: partially initialized module 'datasets' has no attribute 'utils' (most likely due to a circular import) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> Copy-and-paste the text below in your GitHub issue. - `datasets` version: 2.4.0 - Platform: Windows-10-10.0.22000-SP0 - Python version: 3.8.8 - PyArrow version: 9.0.0 - Pandas version: 1.2.4
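A quick way to triage the circular-import report above is to check whether a local module shadows what `datasets` expects to import; the following is a hedged diagnostic sketch (the module names come from the reporter's imports, not from a confirmed root cause):

```python
# Hedged diagnostic sketch: report where Python resolves the modules involved.
# A local utils.py (or the legacy `nlp` package) on sys.path can shadow or
# interfere with the packages the reporter imports alongside `datasets`.
import importlib.util

for name in ("utils", "nlp", "datasets"):
    spec = importlib.util.find_spec(name)
    print(name, "->", spec.origin if spec else "not found")
```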
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4906/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4906/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: false
https://api.github.com/repos/huggingface/datasets/issues/4904
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4904/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4904/comments
https://api.github.com/repos/huggingface/datasets/issues/4904/events
https://github.com/huggingface/datasets/pull/4904
id: 1,353,002,837
node_id: PR_kwDODunzps4959Ad
number: 4,904
title: [LibriSpeech] Fix dev split local_extracted_archive for 'all' split
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[]
state: open
locked: false
assignee: null
assignees: []
milestone: null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4904). All of your documentation changes will be reflected on that endpoint." ]
created_at: 1,661,594,697,000
updated_at: 1,661,595,152,000
closed_at: null
author_association: CONTRIBUTOR
active_lock_reason: null
We define the keys for the `_DL_URLS` of the dev split as `dev.clean` and `dev.other`: https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L60-L61 These keys get forwarded to the `dl_manager` and thus the `local_extracted_archive`. However, when calling `SplitGenerator` for the dev sets, we query the `local_extracted_archive` keys `validation.clean` and `validation.other`: https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L212 https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L219 The consequence of this is that the `local_extracted_archive` arg passed to `_generate_examples` is always `None`, as the keys `validation.clean` and `validation.other` do not exist in the `local_extracted_archive`. When defining the `audio_file` in `_generate_examples`, since `local_extracted_archive` is always `None`, we always omit the `local_extracted_archive` path from the `audio_file` path, **even** in non-streaming mode: https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L259-L263 Thus, `audio_file` will only ever be the streaming path (`audio_file`, not `os.path.join(local_extracted_archive, audio_file)`). This PR fixes the `.get()` keys for the `local_extracted_archive` for the dev splits.
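A minimal sketch of the mismatch described above (the dictionary contents are illustrative stand-ins, not the loader's real values): looking up the archive dict under the wrong key silently returns `None`, which is exactly how the local path gets dropped.

```python
# Illustrative stand-in for the dl_manager output, keyed like _DL_URLS.
local_extracted_archive = {
    "dev.clean": "/extracted/dev.clean",
    "dev.other": "/extracted/dev.other",
}

print(local_extracted_archive.get("validation.clean"))  # None: the buggy lookup
print(local_extracted_archive.get("dev.clean"))         # the corrected lookup
```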
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4904/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4904/timeline
performed_via_github_app: null
state_reason: null
draft: false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4904", "html_url": "https://github.com/huggingface/datasets/pull/4904", "diff_url": "https://github.com/huggingface/datasets/pull/4904.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4904.patch", "merged_at": null }
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/4903
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4903/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4903/comments
https://api.github.com/repos/huggingface/datasets/issues/4903/events
https://github.com/huggingface/datasets/pull/4903
id: 1,352,539,075
node_id: PR_kwDODunzps494aud
number: 4,903
title: Fix CI reporting
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
created_at: 1,661,534,190,000
updated_at: 1,661,536,173,000
closed_at: 1,661,536,019,000
author_association: MEMBER
active_lock_reason: null
Fix CI so that it reports the defaults (failed and error) besides the custom statuses (xfailed and xpassed) in the test summary. This PR fixes a regression introduced by: - #4845 That PR introduced the reporting of xfailed and xpassed but wrongly removed the reporting of the default failed and error entries.
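For context, pytest's `-r` flag selects which outcomes appear in the short test summary; a hedged sketch of the relevant option (not the repository's actual configuration) might look like:

```ini
# Hypothetical pytest config sketch: "f" and "E" restore failed/error entries,
# while "x" and "X" keep the custom xfailed/xpassed reporting.
[pytest]
addopts = -rfExX
```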
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4903/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4903/timeline
performed_via_github_app: null
state_reason: null
draft: false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4903", "html_url": "https://github.com/huggingface/datasets/pull/4903", "diff_url": "https://github.com/huggingface/datasets/pull/4903.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4903.patch", "merged_at": 1661536019000 }
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/4902
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4902/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4902/comments
https://api.github.com/repos/huggingface/datasets/issues/4902/events
https://github.com/huggingface/datasets/issues/4902
id: 1,352,469,196
node_id: I_kwDODunzps5QnQrM
number: 4,902
title: Name the default config `default`
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,661,530,582,000
updated_at: 1,661,530,598,000
closed_at: null
author_association: CONTRIBUTOR
active_lock_reason: null
Currently, if a dataset has no configuration, a default configuration is created from the dataset name. For example, for a dataset loaded from the hub repository, such as https://huggingface.co/datasets/user/dataset (repo id is `user/dataset`), the default configuration will be `user--dataset`. It might be easier to handle if it were set to `default`, or another reserved word.
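A hedged illustration of the behavior described above (the helper below is an assumption for illustration, not the actual `datasets` internals):

```python
# Hypothetical helper mirroring the described behavior: the default config
# name is derived from the repo id rather than a reserved word.
def default_config_name(repo_id: str) -> str:
    return repo_id.replace("/", "--")

print(default_config_name("user/dataset"))  # "user--dataset"
# The proposal: always use a reserved name such as "default" instead.
```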
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4902/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4902/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: false
https://api.github.com/repos/huggingface/datasets/issues/4901
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4901/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4901/comments
https://api.github.com/repos/huggingface/datasets/issues/4901/events
https://github.com/huggingface/datasets/pull/4901
id: 1,352,438,915
node_id: PR_kwDODunzps494FNX
number: 4,901
title: Raise ManualDownloadError from get_dataset_config_info
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
state: open
locked: false
assignee: null
assignees: []
milestone: null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4901). All of your documentation changes will be reflected on that endpoint." ]
created_at: 1,661,528,756,000
updated_at: 1,661,535,355,000
closed_at: null
author_association: MEMBER
active_lock_reason: null
This PR raises a specific `ManualDownloadError` when `get_dataset_config_info` is called for a dataset that requires manual download. Related to: - #4898 CC: @severo
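A hedged sketch of how a caller could handle the new exception (the import path assumes the error lives in `datasets.builder`, as the discussion in #4898 suggests):

```python
from datasets import get_dataset_config_info
from datasets.builder import ManualDownloadError  # assumed location

try:
    info = get_dataset_config_info("timit_asr")
except ManualDownloadError as err:
    # The dataset requires manually downloaded data; surface a clear message.
    print(f"Manual download required: {err}")
```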
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4901/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4901/timeline
performed_via_github_app: null
state_reason: null
draft: false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4901", "html_url": "https://github.com/huggingface/datasets/pull/4901", "diff_url": "https://github.com/huggingface/datasets/pull/4901.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4901.patch", "merged_at": null }
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/4900
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4900/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4900/comments
https://api.github.com/repos/huggingface/datasets/issues/4900/events
https://github.com/huggingface/datasets/issues/4900
id: 1,352,405,855
node_id: I_kwDODunzps5QnBNf
number: 4,900
title: Dataset Viewer issue for asaxena1990/Dummy_dataset
{ "login": "ankurcl", "id": 56627657, "node_id": "MDQ6VXNlcjU2NjI3NjU3", "avatar_url": "https://avatars.githubusercontent.com/u/56627657?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ankurcl", "html_url": "https://github.com/ankurcl", "followers_url": "https://api.github.com/users/ankurcl/followers", "following_url": "https://api.github.com/users/ankurcl/following{/other_user}", "gists_url": "https://api.github.com/users/ankurcl/gists{/gist_id}", "starred_url": "https://api.github.com/users/ankurcl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ankurcl/subscriptions", "organizations_url": "https://api.github.com/users/ankurcl/orgs", "repos_url": "https://api.github.com/users/ankurcl/repos", "events_url": "https://api.github.com/users/ankurcl/events{/privacy}", "received_events_url": "https://api.github.com/users/ankurcl/received_events", "type": "User", "site_admin": false }
[]
state: open
locked: false
assignee: null
assignees: []
milestone: null
[ "Seems to be linked to the use of the undocumented `_resolve_features` method in the dataset viewer backend:\r\n\r\n```\r\n>>> from datasets import load_dataset\r\n>>> dataset = load_dataset(\"asaxena1990/Dummy_dataset\", name=\"asaxena1990--Dummy_dataset\", split=\"train\", streaming=True)\r\nUsing custom data configuration asaxena1990--Dummy_dataset-4a704ed7e5627563\r\n>>> dataset._resolve_features()\r\nFailed to read file 'https://huggingface.co/datasets/asaxena1990/Dummy_dataset/resolve/06885879a8bdd767d2d27695484fc6c83244617a/dummy_dataset_train.json' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Column() changed from object to array in row 0\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py\", line 109, in _generate_tables\r\n pa_table = paj.read_json(\r\n File \"pyarrow/_json.pyx\", line 246, in pyarrow._json.read_json\r\n File \"pyarrow/error.pxi\", line 143, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: JSON parse error: Column() changed from object to array in row 0\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 1261, in _resolve_features\r\n features = _infer_features_from_batch(self._head())\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 686, in _head\r\n return _examples_to_batch([x for key, x in islice(self._iter(), n)])\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 686, in <listcomp>\r\n return _examples_to_batch([x for key, x in islice(self._iter(), n)])\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 708, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 112, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 651, in wrapper\r\n for key, table in generate_tables_fn(**kwargs):\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py\", line 137, in _generate_tables\r\n f\"This JSON file contain the following fields: {str(list(dataset.keys()))}. \"\r\nAttributeError: 'list' object has no attribute 'keys'\r\n```\r\n\r\nPinging @huggingface/datasets", "Hi ! JSON files containing a list of object are not supported yet, you can use JSON Lines files instead in the meantime\r\n```json\r\n{\"text\": \"can I know this?\", \"intent\": \"Know\", \"type\": \"Test\"}\r\n{\"text\": \"can I know this?\", \"intent\": \"Know\", \"type\": \"Test\"}\r\n...\r\n```" ]
created_at: 1,661,526,944,000
updated_at: 1,661,532,491,000
closed_at: null
author_association: NONE
active_lock_reason: null
### Link _No response_ ### Description _No response_ ### Owner _No response_
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4900/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4900/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: false
https://api.github.com/repos/huggingface/datasets/issues/4899
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4899/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4899/comments
https://api.github.com/repos/huggingface/datasets/issues/4899/events
https://github.com/huggingface/datasets/pull/4899
id: 1,352,031,286
node_id: PR_kwDODunzps492uTO
number: 4,899
title: Re-add code and und language tags
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
created_at: 1,661,507,337,000
updated_at: 1,661,509,638,000
closed_at: 1,661,509,460,000
author_association: MEMBER
active_lock_reason: null
This PR fixes the removal of 2 language tags done by: - #4882 The tags are: - "code": this is not an IANA tag but is needed - "und": this is one of the special scoped tags removed by 0d53202b9abce6fd0358cb00d06fcfd904b875af - used in the "mc4" and "udhr" datasets
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4899/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4899/timeline
performed_via_github_app: null
state_reason: null
draft: false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4899", "html_url": "https://github.com/huggingface/datasets/pull/4899", "diff_url": "https://github.com/huggingface/datasets/pull/4899.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4899.patch", "merged_at": 1661509460000 }
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/4898
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4898/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4898/comments
https://api.github.com/repos/huggingface/datasets/issues/4898/events
https://github.com/huggingface/datasets/issues/4898
id: 1,351,851,254
node_id: I_kwDODunzps5Qk5z2
number: 4,898
title: Dataset Viewer issue for timit_asr
{ "login": "InayatUllah932", "id": 91126978, "node_id": "MDQ6VXNlcjkxMTI2OTc4", "avatar_url": "https://avatars.githubusercontent.com/u/91126978?v=4", "gravatar_id": "", "url": "https://api.github.com/users/InayatUllah932", "html_url": "https://github.com/InayatUllah932", "followers_url": "https://api.github.com/users/InayatUllah932/followers", "following_url": "https://api.github.com/users/InayatUllah932/following{/other_user}", "gists_url": "https://api.github.com/users/InayatUllah932/gists{/gist_id}", "starred_url": "https://api.github.com/users/InayatUllah932/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/InayatUllah932/subscriptions", "organizations_url": "https://api.github.com/users/InayatUllah932/orgs", "repos_url": "https://api.github.com/users/InayatUllah932/repos", "events_url": "https://api.github.com/users/InayatUllah932/events{/privacy}", "received_events_url": "https://api.github.com/users/InayatUllah932/received_events", "type": "User", "site_admin": false }
[]
state: open
locked: false
assignee: null
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
milestone: null
[ "Yes, the dataset viewer is based on `datasets`, and the following does not work:\r\n\r\n```\r\n>>> from datasets import get_dataset_split_names\r\n>>> get_dataset_split_names('timit_asr')\r\nDownloading builder script: 7.48kB [00:00, 6.69MB/s]\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 354, in get_dataset_config_info\r\n for split_generator in builder._split_generators(\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/timit_asr/43f9448dd5db58e95ee48a277f466481b151f112ea53e27f8173784da9254fb2/timit_asr.py\", line 117, in _split_generators\r\n data_dir = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))\r\n File \"/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/posixpath.py\", line 231, in expanduser\r\n path = os.fspath(path)\r\nTypeError: expected str, bytes or os.PathLike object, not NoneType\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 404, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 359, in get_dataset_config_info\r\n raise SplitsNotFoundError(\"The split names could not be parsed from the dataset config.\") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```\r\n\r\ncc @huggingface/datasets ", "Due to license restriction, this dataset needs manual downloading of the original data.\r\n\r\nThis information is in the dataset card: https://huggingface.co/datasets/timit_asr\r\n> The dataset needs to be downloaded manually from https://catalog.ldc.upenn.edu/LDC93S1", "Maybe a better error message for datasets that need manual downloading? @severo \r\n\r\nMaybe we can raise a specific excpetion as done from `load_dataset`...", "Yes, ideally something like https://github.com/huggingface/datasets/blob/main/src/datasets/builder.py#L81\r\n" ]
created_at: 1,661,497,925,000
updated_at: 1,661,526,229,000
closed_at: null
author_association: NONE
active_lock_reason: null
### Link _No response_ ### Description _No response_ ### Owner _No response_
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4898/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4898/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: false
https://api.github.com/repos/huggingface/datasets/issues/4897
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4897/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4897/comments
https://api.github.com/repos/huggingface/datasets/issues/4897/events
https://github.com/huggingface/datasets/issues/4897
id: 1,351,784,727
node_id: I_kwDODunzps5QkpkX
number: 4,897
title: datasets generate large arrow file
{ "login": "osayes", "id": 18533904, "node_id": "MDQ6VXNlcjE4NTMzOTA0", "avatar_url": "https://avatars.githubusercontent.com/u/18533904?v=4", "gravatar_id": "", "url": "https://api.github.com/users/osayes", "html_url": "https://github.com/osayes", "followers_url": "https://api.github.com/users/osayes/followers", "following_url": "https://api.github.com/users/osayes/following{/other_user}", "gists_url": "https://api.github.com/users/osayes/gists{/gist_id}", "starred_url": "https://api.github.com/users/osayes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osayes/subscriptions", "organizations_url": "https://api.github.com/users/osayes/orgs", "repos_url": "https://api.github.com/users/osayes/repos", "events_url": "https://api.github.com/users/osayes/events{/privacy}", "received_events_url": "https://api.github.com/users/osayes/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,661,493,076,000
updated_at: 1,661,493,076,000
closed_at: null
author_association: NONE
active_lock_reason: null
While checking large files on disk, I found this large cache file in the cifar10 data directory: ![image](https://user-images.githubusercontent.com/18533904/186830449-ba96cdeb-0fe8-4543-994d-2abe7145933f.png) The cifar10 dataset is only ~130 MB, but the cache file is almost 30 GB, so something may be wrong here.
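One hedged way to inspect and reclaim the space described above, using the documented cache helpers (whether this explains the 30 GB file is an open question):

```python
from datasets import load_dataset

ds = load_dataset("cifar10", split="train")
print(ds.cache_files)               # lists the Arrow files backing the dataset
removed = ds.cleanup_cache_files()  # deletes cache files no longer in use
print(f"removed {removed} cache file(s)")
```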
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4897/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4897/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: false
https://api.github.com/repos/huggingface/datasets/issues/4896
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4896/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4896/comments
https://api.github.com/repos/huggingface/datasets/issues/4896/events
https://github.com/huggingface/datasets/pull/4896
id: 1,351,180,409
node_id: PR_kwDODunzps49z4fU
number: 4,896
title: Fix missing tags in dataset cards
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
created_at: 1,661,445,703,000
updated_at: 1,661,489,070,000
closed_at: 1,661,488,908,000
author_association: MEMBER
active_lock_reason: null
This PR partially fixes the missing tags in dataset cards; subsequent PRs will follow to complete this task. Related to: - #4833 - #4891
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4896/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4896/timeline
performed_via_github_app: null
state_reason: null
draft: false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4896", "html_url": "https://github.com/huggingface/datasets/pull/4896", "diff_url": "https://github.com/huggingface/datasets/pull/4896.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4896.patch", "merged_at": 1661488908000 }
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/4895
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4895/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4895/comments
https://api.github.com/repos/huggingface/datasets/issues/4895/events
https://github.com/huggingface/datasets/issues/4895
id: 1,350,798,527
node_id: I_kwDODunzps5Qg4y_
number: 4,895
title: load_dataset method returns Unknown split "validation" even if this dir exists
{ "login": "SamSamhuns", "id": 13418507, "node_id": "MDQ6VXNlcjEzNDE4NTA3", "avatar_url": "https://avatars.githubusercontent.com/u/13418507?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SamSamhuns", "html_url": "https://github.com/SamSamhuns", "followers_url": "https://api.github.com/users/SamSamhuns/followers", "following_url": "https://api.github.com/users/SamSamhuns/following{/other_user}", "gists_url": "https://api.github.com/users/SamSamhuns/gists{/gist_id}", "starred_url": "https://api.github.com/users/SamSamhuns/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SamSamhuns/subscriptions", "organizations_url": "https://api.github.com/users/SamSamhuns/orgs", "repos_url": "https://api.github.com/users/SamSamhuns/repos", "events_url": "https://api.github.com/users/SamSamhuns/events{/privacy}", "received_events_url": "https://api.github.com/users/SamSamhuns/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,661,429,460,000
updated_at: 1,661,430,201,000
closed_at: null
author_association: NONE
active_lock_reason: null
## Describe the bug The `datasets.load_dataset` returns a `ValueError: Unknown split "validation". Should be one of ['train', 'test'].` when running `load_dataset(local_data_dir_path, split="validation")` even if the `validation` sub-directory exists in the local data path. The data directories are as follows and attached to this issue: ``` test_data1 |_ train |_ 1012.png |_ metadata.jsonl ... |_ test ... |_ validation |_ 234.png |_ metadata.jsonl ... test_data2 |_ train |_ train_1012.png |_ metadata.jsonl ... |_ test ... |_ validation |_ val_234.png |_ metadata.jsonl ... ``` They contain the same image files and `metadata.jsonl` but the images in `test_data2` have the split names prepended i.e. `train_1012.png, val_234.png` and the images in `test_data1` do not have the split names prepended to the image names i.e. `1012.png, 234.png` I actually saw in another issue `val` was not recognized as a split name but here I would expect the files to take the split from the parent directory name i.e. val should become part of the validation split? ## Steps to reproduce the bug ```python import datasets datasets.logging.set_verbosity_error() from datasets import load_dataset, get_dataset_split_names # the following only finds train, validation and test splits correctly path = "./test_data1" print("######################", get_dataset_split_names(path), "######################") dataset_list = [] for spt in ["train", "test", "validation"]: dataset = load_dataset(path, split=spt) dataset_list.append(dataset) # the following only finds train and test splits path = "./test_data2" print("######################", get_dataset_split_names(path), "######################") dataset_list = [] for spt in ["train", "test", "validation"]: dataset = load_dataset(path, split=spt) dataset_list.append(dataset) ``` ## Expected results ``` ###################### ['train', 'test', 'validation'] ###################### ###################### ['train', 'test', 'validation'] ###################### ``` ## Actual results ``` Traceback (most recent call last): File "test_data_loader.py", line 11, in <module> dataset = load_dataset(path, split=spt) File "/home/venv/lib/python3.8/site-packages/datasets/load.py", line 1758, in load_dataset ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory) File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 893, in as_dataset datasets = map_nested( File "/home/venv/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 385, in map_nested return function(data_struct) File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 924, in _build_single_dataset ds = self._as_dataset( File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 993, in _as_dataset dataset_kwargs = ArrowReader(self._cache_dir, self.info).read( File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 211, in read files = self.get_file_instructions(name, instructions, split_infos) File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 184, in get_file_instructions file_instructions = make_file_instructions( File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 107, in make_file_instructions absolute_instructions = instruction.to_absolute(name2len) File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 616, in to_absolute return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions] File 
"/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 616, in <listcomp> return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions] File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 433, in _rel_to_abs_instr raise ValueError(f'Unknown split "{split}". Should be one of {list(name2len)}.') ValueError: Unknown split "validation". Should be one of ['train', 'test']. ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Linux Ubuntu 18.04 - Python version: 3.8.12 - PyArrow version: 9.0.0 Data files [test_data1.zip](https://github.com/huggingface/datasets/files/9424463/test_data1.zip) [test_data2.zip](https://github.com/huggingface/datasets/files/9424468/test_data2.zip)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4895/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4895/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: false
https://api.github.com/repos/huggingface/datasets/issues/4894
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4894/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4894/comments
https://api.github.com/repos/huggingface/datasets/issues/4894/events
https://github.com/huggingface/datasets/pull/4894
id: 1,350,667,270
node_id: PR_kwDODunzps49yIvr
number: 4,894
title: Add citation information to makhzan dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
created_at: 1,661,422,600,000
updated_at: 1,661,433,722,000
closed_at: 1,661,433,581,000
author_association: MEMBER
active_lock_reason: null
This PR adds the citation information to the `makhzan` dataset, now that the authors have replied to our request for that information: - https://github.com/zeerakahmed/makhzan/issues/43
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4894/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4894/timeline
performed_via_github_app: null
state_reason: null
draft: false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4894", "html_url": "https://github.com/huggingface/datasets/pull/4894", "diff_url": "https://github.com/huggingface/datasets/pull/4894.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4894.patch", "merged_at": 1661433581000 }
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/4893
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4893/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4893/comments
https://api.github.com/repos/huggingface/datasets/issues/4893/events
https://github.com/huggingface/datasets/issues/4893
id: 1,350,655,674
node_id: I_kwDODunzps5QgV66
number: 4,893
title: Oversampling strategy for iterable datasets in `interleave_datasets`
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 3761482852, "node_id": "LA_kwDODunzps7gM6xk", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue", "name": "good second issue", "color": "BDE59C", "default": false, "description": "Issues a bit more difficult than \"Good First\" issues" } ]
state: open
locked: false
assignee: null
[ { "login": "ylacombe", "id": 52246514, "node_id": "MDQ6VXNlcjUyMjQ2NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ylacombe", "html_url": "https://github.com/ylacombe", "followers_url": "https://api.github.com/users/ylacombe/followers", "following_url": "https://api.github.com/users/ylacombe/following{/other_user}", "gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}", "starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions", "organizations_url": "https://api.github.com/users/ylacombe/orgs", "repos_url": "https://api.github.com/users/ylacombe/repos", "events_url": "https://api.github.com/users/ylacombe/events{/privacy}", "received_events_url": "https://api.github.com/users/ylacombe/received_events", "type": "User", "site_admin": false } ]
milestone: null
[ "Hi @lhoestq,\r\nI plunged into the code and it should be manageable for me to work on it!\r\n#take\r\n\r\nAlso, setting `d1`, `d2` and `d3` as you did raised a `SyntaxError: 'yield' inside list comprehension` for me, on Python 3.8.10.\r\nThe following snippet works for me though:\r\n```\r\nd1 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [0, 1, 2]])), {}))\r\nd2 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [10, 11, 12, 13]])), {}))\r\nd3 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [20, 21, 22, 23, 24]])), {}))\r\n```\r\n\r\n", "Great @ylacombe thanks ! I'm assigning you this issue" ]
created_at: 1,661,422,015,000
updated_at: 1,661,442,133,000
closed_at: null
author_association: MEMBER
active_lock_reason: null
In https://github.com/huggingface/datasets/pull/4831 @ylacombe added an oversampling strategy for `interleave_datasets`. However right now it doesn't work for datasets loaded using `load_dataset(..., streaming=True)`, which are `IterableDataset` objects. It would be nice to expand `interleave_datasets` for iterable datasets as well to support this oversampling strategy ```python >>> from datasets.iterable_dataset import IterableDataset, ExamplesIterable >>> d1 = IterableDataset(ExamplesIterable(lambda: [(yield i, {"a": i}) for i in [0, 1, 2]], {})) >>> d2 = IterableDataset(ExamplesIterable(lambda: [(yield i, {"a": i}) for i in [10, 11, 12, 13]], {})) >>> d3 = IterableDataset(ExamplesIterable(lambda: [(yield i, {"a": i}) for i in [20, 21, 22, 23, 24]], {})) >>> dataset = interleave_datasets([d1, d2, d3]) # is supported >>> [x["a"] for x in dataset] [0, 10, 20, 1, 11, 21, 2, 12, 22] >>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted") # is not supported yet >>> [x["a"] for x in dataset] [0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 0, 24] ``` This can be implemented by adding the strategy to both `CyclingMultiSourcesExamplesIterable` and `RandomlyCyclingMultiSourcesExamplesIterable` used in `_interleave_iterable_datasets` in `iterable_dataset.py` I would be happy to share some guidance if anyone would like to give it a shot :)
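For intuition, a plain-Python sketch of the `all_exhausted` idea (this is not the `datasets` implementation, and the restart semantics are an assumption):

```python
from itertools import cycle

def interleave_all_exhausted(*iterables):
    """Cycle over sources, restarting exhausted ones, until all have run out once."""
    iterators = [iter(it) for it in iterables]
    exhausted = [False] * len(iterables)
    for i in cycle(range(len(iterables))):
        try:
            yield next(iterators[i])
        except StopIteration:
            exhausted[i] = True
            if all(exhausted):
                return
            iterators[i] = iter(iterables[i])  # restart this source
            yield next(iterators[i])

print(list(interleave_all_exhausted([0, 1, 2], [10, 11, 12, 13], [20, 21, 22, 23, 24])))
# -> [0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 10, 24, 2, 11]
```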
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4893/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4893/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: false
https://api.github.com/repos/huggingface/datasets/issues/4892
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4892/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4892/comments
https://api.github.com/repos/huggingface/datasets/issues/4892/events
https://github.com/huggingface/datasets/pull/4892
id: 1,350,636,499
node_id: PR_kwDODunzps49yCD3
number: 4,892
title: Add citation to ro_sts and ro_sts_parallel datasets
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4892). All of your documentation changes will be reflected on that endpoint." ]
created_at: 1,661,421,066,000
updated_at: 1,661,424,596,000
closed_at: 1,661,424,596,000
author_association: MEMBER
active_lock_reason: null
This PR adds the citation information to the `ro_sts` and `ro_sts_parallel` datasets, now that the authors have replied to our request for that information: - https://github.com/dumitrescustefan/RO-STS/issues/4
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4892/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4892/timeline
performed_via_github_app: null
state_reason: null
draft: false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4892", "html_url": "https://github.com/huggingface/datasets/pull/4892", "diff_url": "https://github.com/huggingface/datasets/pull/4892.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4892.patch", "merged_at": 1661424596000 }
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/4891
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4891/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4891/comments
https://api.github.com/repos/huggingface/datasets/issues/4891/events
https://github.com/huggingface/datasets/pull/4891
id: 1,350,589,813
node_id: PR_kwDODunzps49x382
number: 4,891
title: Fix missing tags in dataset cards
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,661,418,857,000
updated_at: 1,661,435,015,000
closed_at: 1,661,435,014,000
author_association: MEMBER
active_lock_reason: null
This PR partially fixes the missing tags in dataset cards; subsequent PRs will follow to complete this task. Related to: - #4833
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4891/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4891/timeline
performed_via_github_app: null
state_reason: null
draft: false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4891", "html_url": "https://github.com/huggingface/datasets/pull/4891", "diff_url": "https://github.com/huggingface/datasets/pull/4891.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4891.patch", "merged_at": 1661435014000 }
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/4890
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4890/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4890/comments
https://api.github.com/repos/huggingface/datasets/issues/4890/events
https://github.com/huggingface/datasets/pull/4890
id: 1,350,578,029
node_id: PR_kwDODunzps49x1YC
number: 4,890
title: add Dataset.from_list
{ "login": "sanderland", "id": 48946947, "node_id": "MDQ6VXNlcjQ4OTQ2OTQ3", "avatar_url": "https://avatars.githubusercontent.com/u/48946947?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanderland", "html_url": "https://github.com/sanderland", "followers_url": "https://api.github.com/users/sanderland/followers", "following_url": "https://api.github.com/users/sanderland/following{/other_user}", "gists_url": "https://api.github.com/users/sanderland/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanderland/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanderland/subscriptions", "organizations_url": "https://api.github.com/users/sanderland/orgs", "repos_url": "https://api.github.com/users/sanderland/repos", "events_url": "https://api.github.com/users/sanderland/events{/privacy}", "received_events_url": "https://api.github.com/users/sanderland/received_events", "type": "User", "site_admin": false }
[]
state: open
locked: false
assignee: null
assignees: []
milestone: null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4890). All of your documentation changes will be reflected on that endpoint.", "@albertvillanova it seems tests fail on pyarrow 6, perhaps from_pylist is a v7 method? How do you usually handle these version differences?\r\nAdded something that at least works" ]
created_at: 1,661,418,358,000
updated_at: 1,661,437,816,000
closed_at: null
author_association: NONE
active_lock_reason: null
As discussed in #4885. I initially added this bit at the end, thinking that filling this field was necessary, as is done in `from_dict`. However, it seems the constructor takes care of filling `info` when it is empty. ``` if info.features is None: info.features = Features( { col: generate_from_arrow_type(coldata.type) for col, coldata in zip(pa_table.column_names, pa_table.columns) } ) ```
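A hedged usage sketch of the method this PR adds (the signature is assumed from the title and snippet above; note that `pyarrow.Table.from_pylist` only exists from PyArrow 7 onward, which matches the CI failure mentioned in the comments):

```python
from datasets import Dataset

# Build a Dataset from a list of row dicts; features are inferred from the
# resulting Arrow table, as described above.
ds = Dataset.from_list([{"a": 1, "b": "x"}, {"a": 2, "b": "y"}])
print(ds.features)
```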
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4890/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4890/timeline
performed_via_github_app: null
state_reason: null
draft: false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4890", "html_url": "https://github.com/huggingface/datasets/pull/4890", "diff_url": "https://github.com/huggingface/datasets/pull/4890.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4890.patch", "merged_at": null }
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/4889
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4889/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4889/comments
https://api.github.com/repos/huggingface/datasets/issues/4889/events
https://github.com/huggingface/datasets/issues/4889
id: 1,349,758,525
node_id: I_kwDODunzps5Qc649
number: 4,889
title: torchaudio 11.0 yields different results than torchaudio 12.1 when loading MP3
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Maybe we can just pass this along to torchaudio @lhoestq @albertvillanova ? It be great if you could investigate if the errors lies in datasets or in torchaudio.", "torchaudio did a change in [0.12](https://github.com/pytorch/audio/releases/tag/v0.12.0) on MP3 decoding (which affects common voice):\r\n> MP3 decoding is now handled by FFmpeg in sox_io backend. (https://github.com/pytorch/audio/pull/2419, https://github.com/pytorch/audio/pull/2428)\r\n> - FFmpeg is now used as fallback in sox_io backend, and now MP3 decoding is handled by FFmpeg. To load MP3 audio with torchaudio.load, please install a compatible version of FFmpeg (Version 4 when using an official binary distribution).\r\n> - Note that, whereas the previous MP3 decoding scheme pads the output audio, the new scheme does not. As a consequence, the new version returns shorter audio tensors." ]
1,661,360,083,000
1,661,361,068,000
null
MEMBER
null
## Describe the bug When loading Common Voice with torchaudio 0.11.0, the results are different to 0.12.1, which leads to problems in transformers; see: https://github.com/huggingface/transformers/pull/18749 ## Steps to reproduce the bug If you run the following code once with `torchaudio==0.11.0+cu102` and once with `torchaudio==0.12.1+cu102`, you can see that the tensors differ. This is a pretty big breaking change and makes some integration tests fail in Transformers. ```python #!/usr/bin/env python3 from datasets import load_dataset import datasets import numpy as np import torch import torchaudio print("torch version", torch.__version__) print("torchaudio version", torchaudio.__version__) save_audio = True load_audios = False if save_audio: ds = load_dataset("common_voice", "en", split="train", streaming=True) ds = ds.cast_column("audio", datasets.Audio(sampling_rate=16_000)) ds_iter = iter(ds) sample = next(ds_iter) np.save(f"audio_sample_{torch.__version__}", sample["audio"]["array"]) print(sample["audio"]["array"]) if load_audios: array_torch_11 = np.load("/home/patrick/audio_sample_1.11.0+cu102.npy") print("Array 11 Shape", array_torch_11.shape) print("Array 11 abs sum", np.sum(np.abs(array_torch_11))) array_torch_12 = np.load("/home/patrick/audio_sample_1.12.1+cu102.npy") print("Array 12 Shape", array_torch_12.shape) print("Array 12 abs sum", np.sum(np.abs(array_torch_12))) ``` Having saved the tensors, the print output yields: ``` torch version 1.12.1+cu102 torchaudio version 0.12.1+cu102 Array 11 Shape (122880,) Array 11 abs sum 1396.4988 Array 12 Shape (123264,) Array 12 abs sum 1396.5193 ``` ## Expected results torchaudio 0.11.0 and 0.12.1 should yield the same results. ## Actual results See above. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.1.1.dev0 - Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.34 - Python version: 3.9.7 - PyArrow version: 6.0.1 - Pandas version: 1.4.2
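A quick way to test whether the mismatch is only the trailing padding mentioned in the torchaudio 0.12 release notes (rather than different decoded samples) is to compare the arrays over their common length; this sketch reuses the file names produced by the script above:

```python
# Compare the two saved arrays over their common length; file names are the
# ones produced by the repro script above (adjust paths as needed).
import numpy as np

a11 = np.load("audio_sample_1.11.0+cu102.npy")
a12 = np.load("audio_sample_1.12.1+cu102.npy")

n = min(len(a11), len(a12))
print("length difference:", abs(len(a11) - len(a12)), "samples")
print("max abs diff over common length:", np.max(np.abs(a11[:n] - a12[:n])))
```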
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4889/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4889/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4888
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4888/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4888/comments
https://api.github.com/repos/huggingface/datasets/issues/4888/events
https://github.com/huggingface/datasets/issues/4888
1,349,447,521
I_kwDODunzps5Qbu9h
4,888
Dataset Viewer issue for subjqa
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
open
false
null
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
null
[ "It's a bug in the viewer, thanks for reporting it. We're hoping to update to a new version in the next few days which should fix it." ]
1,661,347,580,000
1,661,348,104,000
null
MEMBER
null
### Link https://huggingface.co/datasets/subjqa ### Description Getting the following error for this dataset: ``` Status code: 500 Exception: Status500Error Message: 2 or more items returned, instead of 1 ``` Not sure what's causing it though 🤔 ### Owner Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4888/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4888/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4887
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4887/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4887/comments
https://api.github.com/repos/huggingface/datasets/issues/4887/events
https://github.com/huggingface/datasets/pull/4887
1,349,426,693
PR_kwDODunzps49t_PM
4,887
Add "cc-by-nc-sa-2.0" to list of licenses
{ "login": "osanseviero", "id": 7246357, "node_id": "MDQ6VXNlcjcyNDYzNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/osanseviero", "html_url": "https://github.com/osanseviero", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "repos_url": "https://api.github.com/users/osanseviero/repos", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Sorry for the issue @albertvillanova! I think it's now fixed! :heart: " ]
1,661,346,709,000
1,661,509,892,000
1,661,509,760,000
MEMBER
null
Datasets side of https://github.com/huggingface/hub-docs/pull/285
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4887/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4887/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4887", "html_url": "https://github.com/huggingface/datasets/pull/4887", "diff_url": "https://github.com/huggingface/datasets/pull/4887.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4887.patch", "merged_at": 1661509760000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4886
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4886/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4886/comments
https://api.github.com/repos/huggingface/datasets/issues/4886/events
https://github.com/huggingface/datasets/issues/4886
1,349,285,569
I_kwDODunzps5QbHbB
4,886
Loading huggan/CelebA-HQ throws pyarrow.lib.ArrowInvalid
{ "login": "JeanKaddour", "id": 11850255, "node_id": "MDQ6VXNlcjExODUwMjU1", "avatar_url": "https://avatars.githubusercontent.com/u/11850255?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JeanKaddour", "html_url": "https://github.com/JeanKaddour", "followers_url": "https://api.github.com/users/JeanKaddour/followers", "following_url": "https://api.github.com/users/JeanKaddour/following{/other_user}", "gists_url": "https://api.github.com/users/JeanKaddour/gists{/gist_id}", "starred_url": "https://api.github.com/users/JeanKaddour/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JeanKaddour/subscriptions", "organizations_url": "https://api.github.com/users/JeanKaddour/orgs", "repos_url": "https://api.github.com/users/JeanKaddour/repos", "events_url": "https://api.github.com/users/JeanKaddour/events{/privacy}", "received_events_url": "https://api.github.com/users/JeanKaddour/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,661,340,261,000
1,661,340,261,000
null
NONE
null
## Describe the bug Loading huggan/CelebA-HQ throws pyarrow.lib.ArrowInvalid ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('huggan/CelebA-HQ') ``` ## Expected results See https://colab.research.google.com/drive/141LJCcM2XyqprPY83nIQ-Zk3BbxWeahq?usp=sharing#scrollTo=N3ml_7f8kzDd ## Actual results ``` File "/home/jean/projects/cold_diffusion/celebA.py", line 4, in <module> dataset = load_dataset('huggan/CelebA-HQ') File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/load.py", line 1793, in load_dataset builder_instance.download_and_prepare( File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 704, in download_and_prepare self._download_and_prepare( File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 793, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 1274, in _prepare_split for key, table in logging.tqdm( File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__ for obj in iterable: File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 67, in _generate_tables parquet_file = pq.ParquetFile(f) File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/pyarrow/parquet/__init__.py", line 286, in __init__ self.reader.open( File "pyarrow/_parquet.pyx", line 1227, in pyarrow._parquet.ParquetReader.open File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file. ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: datasets-2.4.1.dev0 - Platform: Ubuntu 18.04 - Python version: 3.10 - PyArrow version: pyarrow 9.0.0
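Since the traceback points at a file that is not valid Parquet, a first check is whether the cached file really carries the Parquet magic bytes: Parquet files begin and end with the 4-byte marker `PAR1`, while a Git LFS pointer or an HTML error page does not. A minimal sketch, with a placeholder path:

```python
# Check the 4-byte Parquet magic at both ends of the file; the path is a
# placeholder for the cached file from the traceback above.
def looks_like_parquet(path: str) -> bool:
    with open(path, "rb") as f:
        head = f.read(4)
        f.seek(-4, 2)  # seek to 4 bytes before end of file
        tail = f.read(4)
    return head == b"PAR1" and tail == b"PAR1"


print(looks_like_parquet("/path/to/cached/file.parquet"))
```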
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4886/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4886/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4885
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4885/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4885/comments
https://api.github.com/repos/huggingface/datasets/issues/4885/events
https://github.com/huggingface/datasets/issues/4885
1,349,181,448
I_kwDODunzps5QauAI
4,885
Create dataset from list of dicts
{ "login": "sanderland", "id": 48946947, "node_id": "MDQ6VXNlcjQ4OTQ2OTQ3", "avatar_url": "https://avatars.githubusercontent.com/u/48946947?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanderland", "html_url": "https://github.com/sanderland", "followers_url": "https://api.github.com/users/sanderland/followers", "following_url": "https://api.github.com/users/sanderland/following{/other_user}", "gists_url": "https://api.github.com/users/sanderland/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanderland/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanderland/subscriptions", "organizations_url": "https://api.github.com/users/sanderland/orgs", "repos_url": "https://api.github.com/users/sanderland/repos", "events_url": "https://api.github.com/users/sanderland/events{/privacy}", "received_events_url": "https://api.github.com/users/sanderland/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi @sanderland, thanks for your enhancement proposal.\r\n\r\nI agree with you that this would be useful.\r\n\r\nPlease note that under the hood, we use PyArrow tables as backend:\r\n- The implementation of `Dataset.from_dict` uses the PyArrow `Table.from_pydict`\r\n\r\nTherefore, I would suggest:\r\n- Implementing `Dataset.from_list` using the PyArrow `Table.from_pylist`\r\n\r\nWhat do you think?\r\nLet's see if other people have other suggestions...", "Thanks for the quick and positive reply @albertvillanova! \r\n`from_list` seems sensible. Have opened a PR so we can discuss details there." ]
1,661,335,284,000
1,661,419,374,000
null
NONE
null
I often find myself with data from a variety of sources, and a list of dicts is very common among these. However, converting this to a Dataset is a little awkward, requiring either ```Dataset.from_pandas(pd.DataFrame(formatted_training_data))``` which can error out on some more exotic values such as 2-d arrays, for reasons that are not entirely clear > ArrowInvalid: ('Can only convert 1-dimensional array values', 'Conversion failed for column labels with type object') Alternatively: ```Dataset.from_dict({k: [s[k] for s in formatted_training_data] for k in formatted_training_data[0].keys()})``` which works, but is a little ugly. **Describe the solution you'd like** Either `.from_dict` accepting a list of dicts, or a `.from_records` function accepting such. I am happy to PR this; I just wanted to check that you are happy to accept it, that I haven't missed something obvious, and which of the solutions would be preferred.
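A hedged sketch of the transposition workaround above, generalized to tolerate rows with missing keys; `records_to_columns` is a hypothetical helper, not a datasets API:

```python
# Transpose a list of row dicts into the dict-of-columns form that
# Dataset.from_dict accepts; rows may omit keys (filled with None).
from typing import Any, Dict, List

from datasets import Dataset


def records_to_columns(records: List[Dict[str, Any]]) -> Dict[str, List[Any]]:
    keys = {key for row in records for key in row}
    return {key: [row.get(key) for row in records] for key in sorted(keys)}


rows = [{"text": "hello", "label": 0}, {"text": "world", "label": 1}]
ds = Dataset.from_dict(records_to_columns(rows))
print(ds)
```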
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4885/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4885/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4884
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4884/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4884/comments
https://api.github.com/repos/huggingface/datasets/issues/4884/events
https://github.com/huggingface/datasets/pull/4884
1,349,105,946
PR_kwDODunzps49s6Aj
4,884
Fix documentation card of math_qa dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4884). All of your documentation changes will be reflected on that endpoint." ]
1,661,331,656,000
1,661,340,797,000
1,661,340,796,000
MEMBER
null
Fix documentation card of math_qa dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4884/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4884/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4884", "html_url": "https://github.com/huggingface/datasets/pull/4884", "diff_url": "https://github.com/huggingface/datasets/pull/4884.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4884.patch", "merged_at": 1661340796000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4883
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4883/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4883/comments
https://api.github.com/repos/huggingface/datasets/issues/4883/events
https://github.com/huggingface/datasets/issues/4883
1,349,083,235
I_kwDODunzps5QaWBj
4,883
With dataloader RSS memory consumed by HF datasets monotonically increases
{ "login": "apsdehal", "id": 3616806, "node_id": "MDQ6VXNlcjM2MTY4MDY=", "avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4", "gravatar_id": "", "url": "https://api.github.com/users/apsdehal", "html_url": "https://github.com/apsdehal", "followers_url": "https://api.github.com/users/apsdehal/followers", "following_url": "https://api.github.com/users/apsdehal/following{/other_user}", "gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}", "starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions", "organizations_url": "https://api.github.com/users/apsdehal/orgs", "repos_url": "https://api.github.com/users/apsdehal/repos", "events_url": "https://api.github.com/users/apsdehal/events{/privacy}", "received_events_url": "https://api.github.com/users/apsdehal/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,661,330,574,000
1,661,371,789,000
null
MEMBER
null
## Describe the bug When HF datasets is used in conjunction with a PyTorch DataLoader, the RSS memory of the process keeps on increasing when it should stay constant. ## Steps to reproduce the bug Run and observe the output of this snippet, which logs RSS memory. ```python import psutil import os from transformers import BertTokenizer from datasets import load_dataset from torch.utils.data import DataLoader BATCH_SIZE = 32 NUM_TRIES = 10 tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") def transform(x): x.update(tokenizer(x["text"], return_tensors="pt", max_length=64, padding="max_length", truncation=True)) x.pop("text") x.pop("label") return x dataset = load_dataset("imdb", split="train") dataset.set_transform(transform) train_loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=4) mem_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024) count = 0 while count < NUM_TRIES: for idx, batch in enumerate(train_loader): mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024) print(count, idx, mem_after - mem_before) count += 1 ``` ## Expected results Memory should not increase after the initial setup and loading of the dataset. ## Actual results Memory continuously increases, as can be seen in the log. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10 - Python version: 3.8.13 - PyArrow version: 7.0.0
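One way to separate a genuine leak from one-off warm-up cost is to log the RSS delta once per epoch instead of per batch; a roughly constant positive delta after the first epoch would point at a leak. A minimal sketch that works with any DataLoader built as above:

```python
# Log the RSS delta once per epoch; pass any DataLoader built as above.
import os

import psutil


def rss_mb() -> float:
    return psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)


def log_epoch_rss(loader, num_epochs: int = 5) -> None:
    prev = rss_mb()
    for epoch in range(num_epochs):
        for _batch in loader:
            pass  # iterate only; we are measuring memory, not training
        now = rss_mb()
        print(f"epoch {epoch}: RSS {now:.1f} MiB (delta {now - prev:+.1f} MiB)")
        prev = now
```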
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4883/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/huggingface/datasets/issues/4883/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4882
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4882/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4882/comments
https://api.github.com/repos/huggingface/datasets/issues/4882/events
https://github.com/huggingface/datasets/pull/4882
1,348,913,665
PR_kwDODunzps49sRtv
4,882
Fix language tags resource file
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4882). All of your documentation changes will be reflected on that endpoint." ]
1,661,321,161,000
1,661,349,513,000
1,661,349,510,000
MEMBER
null
This PR fixes/updates/adds ALL language tags from IANA (as of 2022-08-08). This PR also removes all BCP47 suffixes (the languages file only contains language subtags, i.e. ISO 639-1 or 639-2 codes; no script/region/variant suffixes). See: - #4753
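For reference, a minimal sketch of extracting bare language subtags from the IANA registry this PR syncs against; the parse is deliberately simplified (it ignores continuation lines and keeps private-use ranges such as `qaa..qtz`), so treat it as an illustration of the record format rather than the PR's actual tooling:

```python
# Extract subtags of Type "language" from the IANA registry; records are
# separated by "%%" lines and carry "Field: value" pairs.
import urllib.request

URL = "https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry"


def iana_language_subtags() -> list:
    text = urllib.request.urlopen(URL).read().decode("utf-8")
    subtags = []
    for record in text.split("%%"):
        fields = dict(
            line.split(": ", 1) for line in record.strip().splitlines() if ": " in line
        )
        if fields.get("Type") == "language":
            subtags.append(fields["Subtag"])
    return subtags


print(len(iana_language_subtags()), "language subtags")
```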
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4882/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4882/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4882", "html_url": "https://github.com/huggingface/datasets/pull/4882", "diff_url": "https://github.com/huggingface/datasets/pull/4882.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4882.patch", "merged_at": 1661349510000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4881
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4881/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4881/comments
https://api.github.com/repos/huggingface/datasets/issues/4881/events
https://github.com/huggingface/datasets/issues/4881
1,348,495,777
I_kwDODunzps5QYGmh
4,881
Language names and language codes: connecting to a big database (rather than slow enrichment of custom list)
{ "login": "alexis-michaud", "id": 6072524, "node_id": "MDQ6VXNlcjYwNzI1MjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/6072524?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexis-michaud", "html_url": "https://github.com/alexis-michaud", "followers_url": "https://api.github.com/users/alexis-michaud/followers", "following_url": "https://api.github.com/users/alexis-michaud/following{/other_user}", "gists_url": "https://api.github.com/users/alexis-michaud/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexis-michaud/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexis-michaud/subscriptions", "organizations_url": "https://api.github.com/users/alexis-michaud/orgs", "repos_url": "https://api.github.com/users/alexis-michaud/repos", "events_url": "https://api.github.com/users/alexis-michaud/events{/privacy}", "received_events_url": "https://api.github.com/users/alexis-michaud/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Thanks for opening this discussion, @alexis-michaud.\r\n\r\nAs the language validation procedure is shared with other Hugging Face projects, I'm tagging them as well.\r\n\r\nCC: @huggingface/moon-landing ", "on the Hub side, there is not fine grained validation we just check that `language:` contains an array of lowercase strings between 2 and 3 characters long =)\r\n\r\nand for `language_bcp47:` we just check it's an array of strings.\r\n\r\nThe only page where we have a hardcoded list of languages is https://huggingface.co/languages and I've been thinking of hooking that page on an external database of languages (so any suggestion is super interesting), but it's not used for validation.\r\n\r\nThat being said, in `datasets` this file https://github.com/huggingface/datasets/blob/main/src/datasets/utils/resources/languages.json is not really used no? Or just in the tagging tool? What about just removing it?\r\n\r\nalso cc'ing @lbourdois who's been active and helpful on those subjects in the past!", "PS @alexis-michaud is there a DB of language codes you would recommend? That would contain all `ISO 639-1, 639-2 or 639-3 codes` and be kept up to date, and ideally that would be accessible as a Node.js npm package?\r\n\r\ncc @albertvillanova too", "> PS @alexis-michaud is there a DB of language codes you would recommend? That would contain all `ISO 639-1, 639-2 or 639-3 codes` and be kept up to date, and ideally that would be accessible as a Node.js npm package?\r\n> \r\n> cc @albertvillanova too\r\n\r\nMany thanks for your answer! \r\n\r\nThe Glottolog database is kept up to date, and has information on the closest ISO code for each Glottocode. So providing a clean table with equivalences sounds (to me) like something perfectly reasonable to expect from their team. \r\nTo what extent would [pyglottolog](https://github.com/glottolog/pyglottolog) fit the bill / do the job? (API documentation [here](https://pyglottolog.readthedocs.io/en/latest/index.html)) I'm reaching my technical limitations here: I can't assess the distance between what they offer and what the HF team needs. \r\nI have opened an Issue in [their repo](https://github.com/glottolog/glottolog/issues/877). \r\n\r\nVery interested to see where it goes from there.", "I just tried pyglottolog to generate a file with all the current IDs (first column).\r\n\r\n`glottolog languoids` inside the `glottolog` repository.\r\n\r\n[glottolog-languoids-v4.6-10-g5c66eec874.csv](https://github.com/huggingface/datasets/files/9417456/glottolog-languoids-v4.6-10-g5c66eec874.csv)\r\n\r\n", "Greetings @alexis-michaud and others,\r\nI think perhaps a standards-based approach here would help everyone out both at the technical and social layers of technical innovations. \r\n\r\nLet me say a few things: \r\n1. there are multiple kinds of assets in AI that should have associated language codes. \r\n * AI Training Data sets\r\n * AI models\r\n * AI outputs\r\nThese are all distinct components which should be tagged for the language and encoding methods they operate on or enhance. For example, an AI based cross-language tool from French to English (UK variety) still needs to consider if it is operating on oral language speech or written text. This is where [IANA language sub-tags](https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry) come in and are so important. I link to the official source. 
If one wants to use middleware such as a python package or npm package to manage strings then please make sure those packages are updating codes as they are being revised. I see that @julien-c mentioned BCP-47. BCP-47 is the current standard for language tagging. Following it will make the resources you create more findable and let future users better understand or expect any biases which may have been introduced in the different AI based products.\r\n2. BCP-47 is a technical read. However, you will notice that it identifies when to use an ISO 639-1, ISO 639-2, or ISO 639-3. code. This is important for interoperability with many systems. If you are using library systems then you should likely just stick with ISO 639-3 codes.\r\n3. If you are going to use Glottolog codes use them after an `-x-` tag in the BCP-47 format to maintain BCP-47 validity. \r\n4. You should source ISO 639-3 codes directly from the [ISO 639-3 registrar](https://iso639-3.sil.org/code_tables/639/data) as these codes are updated annually, usually in February or March. ISO 639-3 codes have multiple classes: `Active`, `Deprecated`, and `Unassigned`. This means that string length checking is not a sufficient strategy for validation.\r\n5. The names of smaller languages often change depending on the language used to describe them. The [ISO639-2 documentation](https://www.loc.gov/standards/iso639-2/php/code_list.php) has a list of language names for languages with smaller populations for languages in which descriptions about these languages are often written. For example, ISO 639-2's documentation contains the names of languages as they are used in French, German, and English. ISO 639-2 rarely is updated as it is now tied to ISO 639-3's evolution and modern systems should just use ISO 639-3, but these additional names of languages in other languages may not appear in the ISO 369-3 tables.\r\n6. Glottolog codes are also updated at least annually. Usually sometime after ISO 639-3 updates.\r\n7. Please, if the material is in a written mode, please indicate which script is used unless the IANA field has a `suppress script` value. Please use the script tag that BCP-47 calls for from [ISO 15924](https://unicode.org/iso15924/iso15924-codes.html). This also updates at least annually. \r\n8. Another great place to look for language names is the [Unicode CLDR database for locales](https://cldr.unicode.org/translation/displaynames/languagelocale-names). These ought to be congruent with ISO 639-3 but, sometimes CLDR has additional references to languages (such as the french name for a language) which is not contained in ISO 639-2 or ISO 639-3.\r\n9. Wikidata for language names is not always a great source of authoritative information. Language names are asymmetrical. Many times they are contrived because there is no actual name for the language in the language referring... e.g. French doesn't have a name for every language in the world, often they say something like: the language of 'x' people. — English does the same. When a language name standard does not have the best name for a language the best way to handle that is to make a change request with the standards registrar. Keeping track of the source list and the version of your source list for your language codes is very important. \r\n10. Finally, It would be a great service to technologist, minority language communities, and linguists if for all resources of the three types mentioned in number 1 above you added a record to [OLAC](http://www.language-archives.org/). 
— I can help you with that. OLAC is a search interface for language resources.\r\n", "Hi everybody!\r\n\r\nAbout the point:\r\n> also cc'ing @lbourdois who's been active and helpful on those subjects in the past!\r\n\r\nDiscussions on the need to improve the Hub's tagging system (applying to both datasets and models) can be found in the following discussion: https://github.com/huggingface/hub-docs/issues/193\r\nOnce this system has been redone and satisfies the identified needs, a redesign of the [Languages page](https://huggingface.co/languages) would also be relevant: https://github.com/huggingface/hub-docs/issues/194. \r\nI invite you to read them. But as a quick summary, the exchanges were oriented towards the ISO standard (the first HF system was based on it and it is generally the standard indicated in AI/DL papers) by favouring ISO 639-1 if it exists, and fallback to ISO 639-2 or ISO 639-3 if it doesn't. In addition, it is possible to add BCP-47 tags to consider existing varieties/regionalisms within a language (https://huggingface.co/datasets/AmazonScience/massive/discussions/1). If a language does not belong to either of these two standards, then a request should be made to the HF team to add it manually.\r\n\r\n\r\nTo return to the present discussion, thank you for the various databases and methodologies you mention. It makes a big difference to have linguists in the loop 🚀.\r\n\r\nI have a couple of questions where I think an expert perspective would be appreciated:\r\n- Do you think it's possible to easily handle tags that have been deprecated potentially for decades?\r\nFor example (I'm taking the case of Hebrew but this has happened for other languages) I tagged Google models with the \"iw\" [tag](https://huggingface.co/models?language=iw&sort=downloads) because I based it on what the authors gave in their [paper](https://arxiv.org/pdf/2010.11934.pdf) see table 6 page 12). It turns out that this ISO tag has in fact been deprecated since 1989 in favour of the \"he\" tag. It would therefore be necessary to have a verification that transforms the old tags into the most recent ones.\r\n\r\n- When you look up a language on Wikipedia, it usually shows, in addition to the ISO standard, the codes in the Glottolog (which you have already mentioned), [ELP](https://www.endangeredlanguages.com/?hl=en) and [Linguasphere](http://www.linguasphere.info/jr/index.php?l1=home&l2=welcome) databases. Would you have any opinion about these two other databases?\r\n(@julien-c if we could have an interactive map of languages on the [Languages page](https://huggingface.co/languages) like this [one](https://www.endangeredlanguages.com/#/3/35.540/26.548/0/100000/0/low/mid/high/dormant/awakening/unknown) it would be fire 🔥)\r\n\r\n- On the Hub, there is the following dataset where French people speak in English: https://huggingface.co/datasets/Datatang/French_Speaking_English_Speech_Data_by_Mobile_Phone \r\nIs there a database to take this case into account? I have not found any code in the Glottolog database. If based on an IETF BCP-47 standard, I would tend to tag the dataset with \"en-fr\" but would this be something accepted by linguists?\r\nBased on the first post in this thread that there are about 8000 languages, if one considers that a given language can be pronounced by a speaker of the other 7999, that would theoretically make about 64 million BCP-47 language1-language2 codes existing. And even much more if we consider regionalists with language1_regionalism_x-language2_regionalism_y. 
I guess there is no such database.\r\n\r\n- Are there any databases that take into account all the existing sign languages in the world?\r\nIt would be nice to have them included on the Hub.\r\n\r\n- Is there an international classification of languages?\r\nA bit like the [International Classification of Diseases](https://en.wikipedia.org/wiki/International_Classification_of_Diseases) in medicine, which is established by the WHO and used as a reference throughout the world. The idea would be to have a precise number of languages to which we would then have to assign a unique tag in order to find them later. \r\n\r\n- Finally, when can we expect to see all the datasets of [Pangloss](https://pangloss.cnrs.fr/) on HF? 👀 And I don't know if you have a way to help to add also the datasets of [CoCoON](https://cocoon.huma-num.fr/exist/crdo/).", "> I invite you to read them. But as a quick summary, the exchanges were oriented towards the ISO standard (the first HF system was based on it and it is generally the standard indicated in AI/DL papers) by favouring ISO 639-1 if it exists, and fallback to ISO 639-2 or ISO 639-3 if it doesn't. In addition, it is possible to add BCP-47 tags to consider existing varieties/regionalisms within a language (https://huggingface.co/datasets/AmazonScience/massive/discussions/1). If a language does not belong to either of these two standards, then a request should be made to the HF team to add it manually.\r\n\r\nOne comment on this fall back system (which generally follows the BCP-47 process). ISO 639-2 has some codes which refer to a language ambiguously. For example, I believe code `ara` is used for arabic. In some contexts arabic is considered a single language, however, Egyptian Arabic is quite different from Moroccan Arabic, which are both considered separate languages. These ambiguous codes are valid ISO 639-3 codes but they have a special status. They are called `macro codes`. They exist inside the ISO 639-3 standard to provide absolute fallback compatibility between ISO 639-2 and ISO 639-3. However, when considering AI and MT applications with language data, the unforeseen potential applications and the potential for bias using macro codes should be avoided for new applications of language tags to resources. For historical cases where it is not clear what resources were used to create the AI tools or datasets then I understand the use of ambiguous tag uses. So for clarity in language tagging I suggest:\r\n\r\n1. Strictly following BCP-47\r\n2. Whenever possible avoid the use of macro tags in the ISO 639-3 standard. These are BCP-47 valid, but could introduce biases in the application of their use in society. (Generally there are more specific tags available to use in the ISO 639-3 standard.)", "> * Are there any databases that take into account all the existing sign languages in the world?\r\n> It would be nice to have them included on the Hub.\r\n\r\nSign Languages present an interesting case. As I understand the situation. The identification of sign languages has been identified as a component of their endangerment. Some sign languages do exist in ISO 639-3. 
For further discussion on the issue I refer readers to the following publications: \r\n\r\n* https://doi.org/10.3390/languages7010049\r\n* https://www.academia.edu/35870983/The_ethics_of_of_language_identification_and_ISO_639\r\n\r\nOne way to be BCP-47 compliant and identify a sign language which is not identified in any of the BCP-47 referenced standards is to use the ISO 639-3 code for undetermined language `und` and then apply a custom suffix indicator (as explained in BCP-47) `-x-` and a custom code, such as the ones used in https://doi.org/10.3390/languages7010049", "> * Is there an international classification of languages?\r\n> A bit like the [International Classification of Diseases](https://en.wikipedia.org/wiki/International_Classification_of_Diseases) in medicine, which is established by the WHO and used as a reference throughout the world. The idea would be to have a precise number of languages to which we would then have to assign a unique tag in order to find them later.\r\n\r\nYes that would be the function of ISO 639-3. It is the reference standard for languages. It includes a code and its name and the status of the code. Many technical metadata standards for file and computer interoperability reference it, many technical library metadata standards reference it. Some linguists use it. Many governments reference it. \r\n\r\nIndexing diseases are different from indexing languages in several ways, one way is that diseases are the impact of a pathogen not the pathogen itself. If we take COVID-19 as an example, there are many varieties of the pathogen but broadly speaking there is only one disease — with many symptoms.\r\n\r\n", ">* When you look up a language on Wikipedia, it usually shows, in addition to the ISO standard, the codes in the Glottolog (which you have already mentioned), [ELP](https://www.endangeredlanguages.com/?hl=en) and [Linguasphere](http://www.linguasphere.info/jr/index.php?l1=home&l2=welcome) databases. Would you have any opinion about these two other databases?\r\n\r\nWhile these do appear on wikipedia, I don't know of any information system which uses these codes. I do know that glottolog did import ELP data at one time and its database does contain ELP data I'm not sure if Glottolog regularly ingests new versions of ELP data. I suspect that the use of Linguasphere data may be relevant to users of wikidata as a linked data attribute but I haven't heard of any linked data projects using Linguasphere data for analysis or product development. My impression is that it is fairly unused.", "> * Do you think it's possible to easily handle tags that have been deprecated potentially for decades?\r\n>For example (I'm taking the case of Hebrew but this has happened for other languages) I [tag](https://huggingface.co/models?language=iw&sort=downloads)ged Google models with the \"iw\" tag because I based it on what the authors gave in their [paper](https://arxiv.org/pdf/2010.11934.pdf) see table 6 page 12). It turns out that this ISO tag has in fact been deprecated since 1989 in favour of the \"he\" tag. It would therefore be necessary to have a verification that transforms the old tags into the most recent ones.\r\n\r\nYes. You can parse the IANA file linked to above (it is regularly updated). All deprecated tags are marked as such in that file. The new prefered tag if there is one, is indicated. 
ISO 639-3 also indicates a code's status but their list is relevant only codes within their domain (ISO 639-3).", "> * On the Hub, there is the following dataset where French people speak in English: https://huggingface.co/datasets/Datatang/French_Speaking_English_Speech_Data_by_Mobile_Phone\r\nIs there a database to take this case into account? I have not found any code in the Glottolog database. If based on an IETF BCP-47 standard, I would tend to tag the dataset with \"en-fr\" but would this be something accepted by linguists?\r\n\r\nI would interpret `en-fr` as english as spoken in France. `fr`in this position refers to the geo-political entity not a second language. I see no reason that other linguists should have a different option after having read BCP-47 and understood how it works.\r\n\r\nThe functional goal here is to tag a language resource as being produced by nonnative speakers, while tagging both languages. There are several problems here. The first is that BCP-47 has no way explicit way to do this. One could use the sub code `x-` with a private use code to indicate a second language and infer some meaning as to that language's role. However, there is another problem here which complexifies the situation greatly... how do we know that those english speakers (in France, or from France, or who were native French speakers) were not speaking their third or fourth language rather than their second language. So to conceptualize a sub-tag which indicates the first language of a speech act for speakers in a second (or other) language would need to be carefully crafted. It might then be proposed to the appropriate authorities. For example three sub-tags exist.\r\n\r\nThere are three registered sub-tags out of a BCP-47 allowed 35. These are `x-`, `u-`, and `t-`. `u-` and `t-` are defined in [RFC6067 ](https://www.rfc-editor.org/rfc/rfc6067)and [RFC6497](https://www.rfc-editor.org/rfc/rfc6497) . For more information see the [Unicode CLDR documentation](https://cldr.unicode.org/index/bcp47-extension) where it says: \r\n\r\n\r\n>[IETF BCP 47 ](http://www.google.com/url?q=http%3A%2F%2Ftools.ietf.org%2Fhtml%2Fbcp47&sa=D&sntz=1&usg=AOvVaw1DoMN1IBGg-JHgECBvdW1t)[Tags for Identifying Languages](http://www.google.com/url?q=http%3A%2F%2Ftools.ietf.org%2Fhtml%2Fbcp47&sa=D&sntz=1&usg=AOvVaw1DoMN1IBGg-JHgECBvdW1t) defines the language identifiers (tags) used on the Internet and in many standards. It has an extension mechanism that allows additional information to be included. The Unicode Consortium is the maintainer of the extension ‘u’ for Locale Extensions, as described in [rfc6067](https://www.google.com/url?q=https%3A%2F%2Ftools.ietf.org%2Fhtml%2Frfc6067&sa=D&sntz=1&usg=AOvVaw0gGWi0EjHfy1WId8k8oKAi), and the extension 't' for Transformed Content, as described in [rfc6497](https://www.google.com/url?q=https%3A%2F%2Ftools.ietf.org%2Fhtml%2Frfc6497&sa=D&sntz=1&usg=AOvVaw0w-OUsFX1PtaKYIq31P64I).\r\n>\r\n>The subtags available for use in the 'u' extension provide language tag extensions that provide for additional information needed for identifying locales. The 'u' subtags consist of a set of keys and associated values (types). For example, a locale identifier for British English with numeric collation has the following form: en-GB-u-kn-true\r\n>\r\n>The subtags available for use in the 't' extension provide language tag extensions that provide for additional information needed for identifying transformed content, or a request to transform content in a certain way. 
For example, the language tag \"ja-Kana-t-it\" can be used as a content tag indicates Japanese Katakana transformed from Italian. It can also be used as a request for a given transformation.\r\n>\r\n>For more details on the valid subtags for these extensions, their syntax, and their meanings, see LDML Section 3.7 [Unicode BCP 47 Extension Data](http://www.google.com/url?q=http%3A%2F%2Fwww.unicode.org%2Freports%2Ftr35%2F%23Locale_Extension_Key_and_Type_Data&sa=D&sntz=1&usg=AOvVaw0lMthb9KbTJtoOd5mvv3Ha)." ]
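Following the suggestion in the comments above, a hedged sketch of building a deprecated-to-preferred mapping (e.g. `iw` -> `he`) from the IANA registry; it assumes records are separated by `%%` and carry `Deprecated:` and `Preferred-Value:` fields when applicable, and the simplified parse ignores continuation lines:

```python
# Build a deprecated -> preferred mapping from the IANA registry.
import urllib.request

URL = "https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry"


def deprecated_to_preferred() -> dict:
    text = urllib.request.urlopen(URL).read().decode("utf-8")
    mapping = {}
    for record in text.split("%%"):
        fields = dict(
            line.split(": ", 1) for line in record.strip().splitlines() if ": " in line
        )
        if "Deprecated" in fields and "Preferred-Value" in fields:
            # Plain subtags use "Subtag:"; grandfathered entries use "Tag:".
            key = fields.get("Subtag") or fields.get("Tag")
            mapping[key] = fields["Preferred-Value"]
    return mapping


print(deprecated_to_preferred().get("iw"))  # expected: "he"
```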
1,661,285,664,000
1,661,630,460,000
null
NONE
null
**The problem:** Language diversity is an important dimension of the diversity of datasets. To find one's way around datasets, being able to search by language name and by standardized codes appears crucial. Currently the list of language codes is [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/resources/languages.json), right? At about 1,500 entries, it is roughly at 1/4th of the world's diversity of extant languages. (Probably less, as the list of 1,418 contains variants that are linguistically very close: 108 varieties of English, for instance.) Looking forward to ever increasing coverage, how will the list of language names and language codes improve over time? Enrichment of the custom list by HFT contributors (like [here](https://github.com/huggingface/datasets/pull/4880)) has several issues: * progress is likely to be slow: ![image](https://user-images.githubusercontent.com/6072524/186253353-62f42168-3d31-4105-be1c-5eb1f818d528.png) (input required from reviewers, etc.) * the more contributors, the less consistency can be expected among contributions. No need to elaborate on how much confusion is likely to ensue as datasets accumulate. * there is no information on which language relates with which: no encoding of the special closeness between the languages of the Northwestern Germanic branch (English+Dutch+German etc.), for instance. Information on phylogenetic closeness can be relevant to run experiments on transfer of technology from one language to its close relatives. **A solution that seems desirable:** Connecting to an established database that (i) aims at full coverage of the world's languages and (ii) has information on higher-level groupings, alternative names, etc. It takes a lot of hard work to do such databases. Two important initiatives are [Ethnologue](https://www.ethnologue.com/) (ISO standard) and [Glottolog](https://glottolog.org/). Both have pros and cons. Glottolog contains references to Ethnologue identifiers, so adopting Glottolog entails getting the advantages of both sets of language codes. Both seem technically accessible & 'developer-friendly'. Glottolog has a [GitHub repo](https://github.com/glottolog/glottolog). For Ethnologue, harvesting tools have been devised (see [here](https://github.com/lyy1994/ethnologue); I did not try it out). In case a conversation with linguists seemed in order here, I'd be happy to participate ('pro bono', of course), & to rustle up more colleagues as useful, to help this useful development happen. With appreciation of HFT,
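A small sketch of how the `glottolog languoids` CSV mentioned in the comments could be turned into a Glottocode-to-ISO lookup; the column names `id` and `iso639-3` are assumptions about the pyglottolog export and may differ:

```python
# Build a Glottocode -> ISO 639-3 lookup from the attached CSV.
import pandas as pd

df = pd.read_csv("glottolog-languoids-v4.6-10-g5c66eec874.csv")
glotto_to_iso = dict(
    df.dropna(subset=["iso639-3"])[["id", "iso639-3"]].itertuples(index=False, name=None)
)
print(len(glotto_to_iso), "languoids with an ISO 639-3 code")
```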
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4881/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4881/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4880
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4880/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4880/comments
https://api.github.com/repos/huggingface/datasets/issues/4880/events
https://github.com/huggingface/datasets/pull/4880
1,348,452,776
PR_kwDODunzps49qyJr
4,880
Added names of less-studied languages
{ "login": "BenjaminGalliot", "id": 23100612, "node_id": "MDQ6VXNlcjIzMTAwNjEy", "avatar_url": "https://avatars.githubusercontent.com/u/23100612?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BenjaminGalliot", "html_url": "https://github.com/BenjaminGalliot", "followers_url": "https://api.github.com/users/BenjaminGalliot/followers", "following_url": "https://api.github.com/users/BenjaminGalliot/following{/other_user}", "gists_url": "https://api.github.com/users/BenjaminGalliot/gists{/gist_id}", "starred_url": "https://api.github.com/users/BenjaminGalliot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BenjaminGalliot/subscriptions", "organizations_url": "https://api.github.com/users/BenjaminGalliot/orgs", "repos_url": "https://api.github.com/users/BenjaminGalliot/repos", "events_url": "https://api.github.com/users/BenjaminGalliot/events{/privacy}", "received_events_url": "https://api.github.com/users/BenjaminGalliot/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "OK, I removed Glottolog codes and only added ISO 639-3 ones. The former are for the moment in corpus card description, language details, and in subcorpora names.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4880). All of your documentation changes will be reflected on that endpoint." ]
1,661,283,158,000
1,661,345,566,000
1,661,345,566,000
CONTRIBUTOR
null
Added names of less-studied languages (nru – Narua and jya – Japhug) for existing datasets.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4880/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4880/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4880", "html_url": "https://github.com/huggingface/datasets/pull/4880", "diff_url": "https://github.com/huggingface/datasets/pull/4880.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4880.patch", "merged_at": 1661345566000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4879
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4879/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4879/comments
https://api.github.com/repos/huggingface/datasets/issues/4879/events
https://github.com/huggingface/datasets/pull/4879
1,348,346,407
PR_kwDODunzps49qbOl
4,879
Fix Citation Information section in dataset cards
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4879). All of your documentation changes will be reflected on that endpoint." ]
1,661,278,003,000
1,661,314,148,000
1,661,314,147,000
MEMBER
null
Fix Citation Information section in dataset cards. This PR partially fixes the Citation Information section in dataset cards. Subsequent PRs will follow to complete this task.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4879/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4879/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4879", "html_url": "https://github.com/huggingface/datasets/pull/4879", "diff_url": "https://github.com/huggingface/datasets/pull/4879.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4879.patch", "merged_at": 1661314147000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4878
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4878/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4878/comments
https://api.github.com/repos/huggingface/datasets/issues/4878/events
https://github.com/huggingface/datasets/issues/4878
1,348,270,141
I_kwDODunzps5QXPg9
4,878
[not really a bug] `identical_ok` is deprecated in huggingface-hub's `upload_file`
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892884, "node_id": "MDU6TGFiZWwxOTM1ODkyODg0", "url": "https://api.github.com/repos/huggingface/datasets/labels/help%20wanted", "name": "help wanted", "color": "008672", "default": true, "description": "Extra attention is needed" }, { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
open
false
null
[]
null
[]
1,661,274,595,000
1,661,274,609,000
null
CONTRIBUTOR
null
In the huggingface-hub dependency, the `identical_ok` argument has no effect in `upload_file` (and it will be removed soon). See https://github.com/huggingface/huggingface_hub/blob/43499582b19df1ed081a5b2bd7a364e9cacdc91d/src/huggingface_hub/hf_api.py#L2164-L2169 It's used here: https://github.com/huggingface/datasets/blob/fcfcc951a73efbc677f9def9a8707d0af93d5890/src/datasets/dataset_dict.py#L1373-L1381 https://github.com/huggingface/datasets/blob/fdcb8b144ce3ef241410281e125bd03e87b8caa1/src/datasets/arrow_dataset.py#L4354-L4362 https://github.com/huggingface/datasets/blob/fdcb8b144ce3ef241410281e125bd03e87b8caa1/src/datasets/arrow_dataset.py#L4197-L4213 We should remove it. The third code sample may have unexpected behavior, since it passes the non-default value `identical_ok=False`, but the argument is ignored.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4878/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4878/timeline
null
null
null
null
false
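To make the suggested cleanup concrete, here is a minimal, hedged sketch of calling `huggingface_hub.upload_file` without the deprecated argument; the repo id, file paths, and token are placeholders, not values from the issue.

```python
from huggingface_hub import HfApi

api = HfApi()
# `identical_ok` is deprecated and ignored, so it is simply dropped:
# re-uploading identical content is already a no-op on the Hub side.
api.upload_file(
    path_or_fileobj="data/train.csv",  # hypothetical local file
    path_in_repo="data/train.csv",
    repo_id="user/my-dataset",         # hypothetical dataset repo
    repo_type="dataset",
    token="hf_xxx",                    # placeholder token
)
```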
https://api.github.com/repos/huggingface/datasets/issues/4877
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4877/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4877/comments
https://api.github.com/repos/huggingface/datasets/issues/4877/events
https://github.com/huggingface/datasets/pull/4877
1,348,246,755
PR_kwDODunzps49qF-w
4,877
Fix documentation card of covid_qa_castorini dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4877). All of your documentation changes will be reflected on that endpoint." ]
1,661,273,553,000
1,661,277,901,000
1,661,277,900,000
MEMBER
null
Fix documentation card of covid_qa_castorini dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4877/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4877/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4877", "html_url": "https://github.com/huggingface/datasets/pull/4877", "diff_url": "https://github.com/huggingface/datasets/pull/4877.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4877.patch", "merged_at": 1661277900000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4876
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4876/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4876/comments
https://api.github.com/repos/huggingface/datasets/issues/4876/events
https://github.com/huggingface/datasets/issues/4876
1,348,202,678
I_kwDODunzps5QW_C2
4,876
Move DatasetInfo from `datasets_infos.json` to the YAML tags in `README.md`
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "also @osanseviero @Pierrci @SBrandeis potentially", "Love this in principle 🚀 \r\n\r\nLet's keep in mind users might rely on `dataset_infos.json` already.\r\n\r\nI'm not convinced by the two-syntax solution, wouldn't it be simpler to have only one syntax with a `default` config for datasets with only one config? ie, always having the `configs` field. This makes parsing the metadata easier IMO.\r\n\r\nMight also be good to wrap the tags under a `datasets_info` tag as follows:\r\n\r\n```yaml\r\ndescription: ...\r\ncitation: ...\r\ndataset_infos:\r\n download_size: 35142551\r\n dataset_size: 89789763\r\n version: 1.0.0\r\n configs:\r\n - ...\r\n[...]\r\n```\r\n\r\nLet's also keep in mind that extracting YAML metadata from a markdown readme is a bit more fastidious for users than just parsing a JSON file.", "> Let's keep in mind users might rely on dataset_infos.json already.\r\n\r\nYea we'll full full backward compatibility\r\n\r\n> Let's also keep in mind that extracting YAML metadata from a markdown readme is a bit more fastidious for users than just parsing a JSON file.\r\n\r\nThe main things that may use or ingest these data IMO are:\r\n- users in the UI or IDE\r\n- `datasets` to populate `DatasetInfo` python object\r\n- moon landing which is already parsing YAML\r\n\r\nAm I missing something ? If not I think it's ok to use YAML\r\n\r\n> Might also be good to wrap the tags under a datasets_info tag as follows:\r\n\r\nMaybe one single syntax like this then ?\r\n```yaml\r\ndataset_infos:\r\n- config: unlabeled\r\n download_size: 35142551\r\n dataset_size: 89789763\r\n version: 1.0.0\r\n splits:\r\n - name: train\r\n num_examples: 10000\r\n features:\r\n - name: text\r\n dtype: string\r\n- config: labeled\r\n download_size: 35142551\r\n dataset_size: 89789763\r\n version: 1.0.0\r\n splits:\r\n - name: train\r\n num_examples: 100\r\n features:\r\n - name: text\r\n dtype: string\r\n - name: label\r\n dtype: ClassLabel\r\n names:\r\n - negative\r\n - positive\r\n```\r\nand when you have only one config\r\n```yaml\r\ndataset_infos:\r\n- config: default\r\n splits:\r\n - name: train\r\n num_examples: 10000\r\n features:\r\n - name: text\r\n dtype: string\r\n```", "love the idea, and the trend in general to move more things (like tasks) to a single place (YAML).\r\n\r\nalso, if you browse files on a dataset's page (in \"Files and versions\"), raw `README.md` files looks nice and readable, while `.json` files are just one long line that users need to scroll. \r\n\r\n> Let's also keep in mind that extracting YAML metadata from a markdown readme is a bit more fastidious for users than just parsing a JSON file.\r\n\r\ndo users often parse `datasets_infos.json` file themselves? 
", "> do users often parse datasets_infos.json file themselves?\r\n\r\nNot AFAIK, but I'm sure there should be a few users.\r\nUsers that access these info via the `DatasetInfo` from `datasets` won't see the change though e.g.\r\n```python\r\n>> from datasets import get_datasets_infos\r\n>>> get_datasets_infos(\"squad\")\r\n{'plain_text': DatasetInfo(description='Stanford Question Answering Dataset...\r\n```", "> Maybe one single syntax like this then ?\r\n\r\nLGTM!\r\n\r\n> The main things that may use or ingest these data IMO are:\r\n> - users in the UI or IDE\r\n> - datasets to populate DatasetInfo python object\r\n> - moon landing which is already parsing YAML\r\n\r\nFair point!\r\n\r\nHaving dataset info in the README's YAML is great for API / `huggingface_hub` consumers as well as it will be inserted in the `cardData` field out of the box 🔥 \r\n", "Very supportive of this!\r\n\r\nNesting an array of configs inside `dataset_infos: ` sounds good to me. One small tweak is that `config: default` can be optional for the default config (which can be the first one by convention)\r\n\r\nWe'll be able to implement metadata validation on the Hub side so we ensure that those metadata are always in the right format (maybe for @coyotte508 ? cc @Pierrci). From a quick glance the `features` might be the harder part to validate here, any doc will be welcome.\r\n\r\n### Other high-level points:\r\n- as we move from mostly academic datasets to *all* datasets (which include the data inside the repos), my intuition is that more and more datasets (Hub-stored) are going to be **single-config**\r\n- similarly, less and less datasets will have a loading script, **just the data + some metadata**\r\n- to lower the barrier to entry to contribution, in the long term users shouldn't need to compute/update this data via a command line. 
It could be filled automatically on the Hub through a \"bot\" inside Discussions & Pull requests for instance.", "re: `config: default`\r\n\r\nNote also that the default config is not named `default`, afaiu, but create from the repo name, eg: https://huggingface.co/datasets/nbtpj/bionlp2021SAS default config is `nbtpj--bionlp2021SAS` (which is awful)", "> Note also that the default config is not named default, afaiu, but create from the repo name, eg: https://huggingface.co/datasets/nbtpj/bionlp2021SAS default config is nbtpj--bionlp2021SAS (which is awful)\r\n\r\nWe can change this to `default` I think or something else", "> From a quick glance the features might be the harder part to validate here, any doc will be welcome.\r\n\r\nI dug into features validation, see:\r\n\r\n- the OpenAPI spec: https://github.com/huggingface/datasets-server/blob/main/chart/static-files/openapi.json#L460-L697\r\n- the node.js code: https://github.com/huggingface/moon-landing/blob/upgrade-datasets-server-client/server/lib/datasets/FeatureType.ts", "> We can change this to default I think or something else\r\n\r\nI created https://github.com/huggingface/datasets/issues/4902 to discuss that", "> Note also that the default config is not named `default`, afaiu, but create from the repo name\r\n\r\nin case of single-config you can even hide the config name from the UI IMO\r\n\r\n> I dug into features validation, see: the OpenAPI spec\r\n\r\nin moon-landing we use [Joi](https://joi.dev/api/) to validate metadata so we would need to generate from Joi code from the OpenAPI spec (or from somewhere else) but I guess that's doable – or just rewrite it manually, as it won't change often", "I remember there was an ongoing discussion on this topic:\r\n- #3507\r\n\r\nI recall some of the concerns raised on that discussion:\r\n- @lhoestq: Tensorflow Datasets catalog includes a community catalog where you can find and use HF datasets. They are using the exported dataset_infos.json files from github to get the metadata: [#3507 (comment)](https://github.com/huggingface/datasets/issues/3507#issuecomment-1056997627)\r\n- @severo: [#3507 (comment)](https://github.com/huggingface/datasets/issues/3507#issuecomment-1042779776)\r\n - the metadata header might be very long, before reaching the start of the README/dataset card. \r\n - It also somewhat prevents including large strings like the checksums\r\n - two concepts are mixed in the same file (metadata and documentation). This means that if you're interested only in one of them, you still have to know how to parse the whole file. \r\n- @severo: the future \"datasets server\" could be in charge of generating the dataset-info.json file: [#3507 (comment)](https://github.com/huggingface/datasets/issues/3507#issuecomment-1033752157)" ]
1,661,271,401,000
1,661,578,548,000
null
MEMBER
null
Currently there are two places to find metadata for datasets: - datasets_infos.json, which contains **per dataset config** - description - citation - license - splits and sizes - checksums of the data files - feature types - and more - YAML tags, which contain - license - language - train-eval-index - and more It would be nice to have a single place instead. We can rely on the YAML tags more than the JSON file for consistency with models. And it would all be indexed by our back-end directly, which is nice to have. One way would be to move everything to the YAML tags except the checksums (there can be tens of thousands of them). The description/citation is already in the dataset card so we probably don't need to have them in the YAML card, it would be redundant. Here is an example for SQuAD ```yaml download_size: 35142551 dataset_size: 89789763 version: 1.0.0 splits: - name: train num_examples: 87599 num_bytes: 79317110 - name: validation num_examples: 10570 num_bytes: 10472653 features: - name: id dtype: string - name: title dtype: string - name: context dtype: string - name: question dtype: string - name: answers struct: - name: text list: dtype: string - name: answer_start list: dtype: int32 ``` Since there is only one configuration for SQuAD, this structure is ok. For datasets with several configs we can see in a second step, but IMO it would be ok to have these fields per config using another syntax ```yaml configs: - config: unlabeled splits: - name: train num_examples: 10000 features: - name: text dtype: string - config: labeled splits: - name: train num_examples: 100 features: - name: text dtype: string - name: label dtype: ClassLabel names: - negative - positive ``` So in the end you could specify a YAML tag either at the top level (for all configs) or per config in the `configs` field. Alternatively, we could keep config-specific stuff in the `dataset_infos.json` as it is today. Not sure yet what's the best approach here, but cc @julien-c @mariosasko @albertvillanova @polinaeterna for feedback :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4876/reactions", "total_count": 7, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 4 }
https://api.github.com/repos/huggingface/datasets/issues/4876/timeline
null
null
null
null
false
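Since the proposal above hinges on machine-readable YAML at the top of `README.md`, here is a minimal, hedged sketch of how a consumer could extract that block; the `---` fence convention matches Hub cards, the file path is hypothetical, and PyYAML is assumed to be installed.

```python
import yaml  # assumption: PyYAML is available

def read_card_metadata(readme_path="README.md"):
    """Parse the YAML block between the leading --- fences of a dataset card."""
    with open(readme_path, encoding="utf-8") as f:
        text = f.read()
    if not text.startswith("---"):
        return {}  # no metadata header present
    # The header is everything between the first and second --- fences.
    header = text.split("---", 2)[1]
    return yaml.safe_load(header)

# e.g. metadata.get("dataset_infos") would hold the per-config entries
# sketched in the issue above.
metadata = read_card_metadata()
```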
https://api.github.com/repos/huggingface/datasets/issues/4875
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4875/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4875/comments
https://api.github.com/repos/huggingface/datasets/issues/4875/events
https://github.com/huggingface/datasets/issues/4875
1,348,095,686
I_kwDODunzps5QWk7G
4,875
`_resolve_features` ignores the token
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi ! Your HF_ENDPOINT seems wrong because of the extra \"/\"\r\n```diff\r\n- os.environ[\"HF_ENDPOINT\"] = \"https://hub-ci.huggingface.co/\"\r\n+ os.environ[\"HF_ENDPOINT\"] = \"https://hub-ci.huggingface.co\"\r\n```\r\n\r\ncan you try again without the extra \"/\" ?", "Oh, yes, sorry, but it's not the issue.\r\n\r\nIn my code, I set `HF_ENDPOINT=https://hub-ci.huggingface.co`. I added `os.environ[\"HF_ENDPOINT\"] = \"https://hub-ci.huggingface.co/\"` afterward just to indicate that we had to have this env var and made a mistake there", "I can't reproduce on my side. I tried using a private dataset repo with a CSV file on hub-ci\r\n\r\nWhat's your version of `huggingface_hub` ?", "I can't reproduce either... Not sure what has occurred, very sorry to have made you lost your time on that " ]
1,661,266,656,000
1,661,358,821,000
1,661,358,810,000
CONTRIBUTOR
null
## Describe the bug When calling [`_resolve_features()`](https://github.com/huggingface/datasets/blob/54b532a8a2f5353fdb0207578162153f7b2da2ec/src/datasets/iterable_dataset.py#L1255) on a gated dataset, ie. a dataset which requires a token to be loaded, the token seems to be ignored even if it has been provided to `load_dataset` before. ## Steps to reproduce the bug ```python import os os.environ["HF_ENDPOINT"] = "https://hub-ci.huggingface.co/" hf_token = "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" from datasets import load_dataset # public dataset_name = "__DUMMY_DATASETS_SERVER_USER__/repo_csv_data-16612654226756" config_name = "__DUMMY_DATASETS_SERVER_USER__--repo_csv_data-16612654226756" split_name = "train" iterable_dataset = load_dataset( dataset_name, name=config_name, split=split_name, streaming=True, use_auth_token=hf_token, ) iterable_dataset = iterable_dataset._resolve_features() print(iterable_dataset.features) # gated dataset_name = "__DUMMY_DATASETS_SERVER_USER__/repo_csv_data-16612654317644" config_name = "__DUMMY_DATASETS_SERVER_USER__--repo_csv_data-16612654317644" split_name = "train" iterable_dataset = load_dataset( dataset_name, name=config_name, split=split_name, streaming=True, use_auth_token=hf_token, ) try: iterable_dataset = iterable_dataset._resolve_features() except FileNotFoundError as e: print("FAILS") ``` ## Expected results I expect to have the same result on a public dataset and on a gated (or private) dataset, if the token has been provided. ## Actual results An exception is thrown on gated datasets. ## Environment info - `datasets` version: 2.4.0 - Platform: Linux-5.15.0-1017-aws-x86_64-with-glibc2.35 - Python version: 3.9.6 - PyArrow version: 7.0.0 - Pandas version: 1.4.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4875/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4875/timeline
null
null
null
null
false
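Until the token propagation reported above is fixed, one hedged workaround is to persist the token globally so every Hub request picks it up; `HfFolder.save_token` is what `use_auth_token=True` reads from, and the token and repo name below are placeholders.

```python
from huggingface_hub import HfFolder
from datasets import load_dataset

# Placeholder token; saving it globally lets internal file-resolution
# calls that drop an explicit token still authenticate.
HfFolder.save_token("hf_xxx")

ds = load_dataset(
    "user/gated-dataset",  # hypothetical gated repo
    split="train",
    streaming=True,
    use_auth_token=True,   # resolved from the saved token
)
ds = ds._resolve_features()
print(ds.features)
```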
https://api.github.com/repos/huggingface/datasets/issues/4874
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4874/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4874/comments
https://api.github.com/repos/huggingface/datasets/issues/4874/events
https://github.com/huggingface/datasets/pull/4874
1,347,618,197
PR_kwDODunzps49n_nI
4,874
[docs] Some tiny doc tweaks
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4874). All of your documentation changes will be reflected on that endpoint." ]
1,661,246,380,000
1,661,362,077,000
1,661,362,076,000
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4874/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4874/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4874", "html_url": "https://github.com/huggingface/datasets/pull/4874", "diff_url": "https://github.com/huggingface/datasets/pull/4874.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4874.patch", "merged_at": 1661362076000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4873
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4873/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4873/comments
https://api.github.com/repos/huggingface/datasets/issues/4873/events
https://github.com/huggingface/datasets/issues/4873
1,347,592,022
I_kwDODunzps5QUp9W
4,873
Multiple dataloader memory error
{ "login": "cyk1337", "id": 13767887, "node_id": "MDQ6VXNlcjEzNzY3ODg3", "avatar_url": "https://avatars.githubusercontent.com/u/13767887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cyk1337", "html_url": "https://github.com/cyk1337", "followers_url": "https://api.github.com/users/cyk1337/followers", "following_url": "https://api.github.com/users/cyk1337/following{/other_user}", "gists_url": "https://api.github.com/users/cyk1337/gists{/gist_id}", "starred_url": "https://api.github.com/users/cyk1337/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cyk1337/subscriptions", "organizations_url": "https://api.github.com/users/cyk1337/orgs", "repos_url": "https://api.github.com/users/cyk1337/repos", "events_url": "https://api.github.com/users/cyk1337/events{/privacy}", "received_events_url": "https://api.github.com/users/cyk1337/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[]
1,661,245,190,000
1,661,245,190,000
null
NONE
null
For multi-dataset, multi-task training, we use more than 200 dataloaders and pass them into `dataloader1, dataloader2, ..., dataloader200 = accelerate.prepare(dataloader1, dataloader2, ..., dataloader200)`. This causes a memory error when generating batches. Any solutions to it? ```bash File "/home/xxx/my_code/src/utils/data_utils.py", line 54, in generate_batch x = next(iterator) File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/accelerate/data_loader.py", line 301, in __iter__ for batch in super().__iter__(): File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__ data = self._next_data() File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 28, in fetch data.append(next(self.dataset_iter)) File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/accelerate/data_loader.py", line 249, in __iter__ for element in self.dataset: File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 503, in __iter__ for key, example in self._iter(): File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 500, in _iter yield from ex_iterable File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 231, in __iter__ new_key = "_".join(str(key) for key in keys) MemoryError ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4873/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4873/timeline
null
null
null
null
false
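Rather than preparing 200+ separate dataloaders, one plausible mitigation is to merge the streams first with `datasets.interleave_datasets` and feed a single dataloader; the dataset names and sampling probabilities below are illustrative, not from the issue.

```python
from datasets import load_dataset, interleave_datasets
from torch.utils.data import DataLoader

# Hypothetical streaming datasets standing in for the 200+ tasks.
streams = [
    load_dataset("user/task_a", split="train", streaming=True),
    load_dataset("user/task_b", split="train", streaming=True),
]

# One mixed iterable dataset instead of many dataloaders; `probabilities`
# controls the per-task sampling mix.
mixed = interleave_datasets(streams, probabilities=[0.5, 0.5], seed=42)
loader = DataLoader(mixed, batch_size=8)
```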
https://api.github.com/repos/huggingface/datasets/issues/4872
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4872/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4872/comments
https://api.github.com/repos/huggingface/datasets/issues/4872/events
https://github.com/huggingface/datasets/pull/4872
1,347,180,765
PR_kwDODunzps49mjU9
4,872
[WIP] Docs for creating an audio dataset
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4872). All of your documentation changes will be reflected on that endpoint.", "Awesome thanks ! I think we can also encourage TAR archives as for image dataset scripts (feel free to copy paste some parts from there lol)" ]
1,661,216,829,000
1,661,614,742,000
null
MEMBER
null
This PR is a first draft of how to create audio datasets (`AudioFolder` and loading script). Feel free to let me know if there are any specificities I'm missing for this. 🙂
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4872/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4872/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4872", "html_url": "https://github.com/huggingface/datasets/pull/4872", "diff_url": "https://github.com/huggingface/datasets/pull/4872.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4872.patch", "merged_at": null }
true
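As a preview of the `AudioFolder` path this draft documents, a hedged sketch of the no-code loading flow; the `audiofolder` loader name mirrors the existing `imagefolder` one, and the directory layout is an assumption.

```python
from datasets import load_dataset

# Assumed layout: data/{label}/{clip}.wav, optionally with a metadata file
# alongside the audio for transcriptions or other columns.
ds = load_dataset("audiofolder", data_dir="data")

# Each example exposes a decoded Audio feature:
# {"path": ..., "array": ..., "sampling_rate": ...}
print(ds["train"][0]["audio"])
```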
https://api.github.com/repos/huggingface/datasets/issues/4871
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4871/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4871/comments
https://api.github.com/repos/huggingface/datasets/issues/4871/events
https://github.com/huggingface/datasets/pull/4871
1,346,703,568
PR_kwDODunzps49k9Rm
4,871
Fix: wmt datasets - fix CWMT zh subsets
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4871). All of your documentation changes will be reflected on that endpoint." ]
1,661,186,529,000
1,661,248,820,000
1,661,248,819,000
MEMBER
null
Fix https://github.com/huggingface/datasets/issues/4575 TODO: run `datasets-cli test`: - [x] wmt17 - [x] wmt18 - [x] wmt19
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4871/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4871/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4871", "html_url": "https://github.com/huggingface/datasets/pull/4871", "diff_url": "https://github.com/huggingface/datasets/pull/4871.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4871.patch", "merged_at": 1661248819000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4870
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4870/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4870/comments
https://api.github.com/repos/huggingface/datasets/issues/4870/events
https://github.com/huggingface/datasets/pull/4870
1,346,160,498
PR_kwDODunzps49jGxD
4,870
audio folder check CI
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,661,163,353,000
1,661,171,672,000
1,661,170,780,000
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4870/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4870", "html_url": "https://github.com/huggingface/datasets/pull/4870", "diff_url": "https://github.com/huggingface/datasets/pull/4870.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4870.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4869
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4869/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4869/comments
https://api.github.com/repos/huggingface/datasets/issues/4869/events
https://github.com/huggingface/datasets/pull/4869
1,345,513,758
PR_kwDODunzps49hBGY
4,869
Fix typos in documentation
{ "login": "Flozii2", "id": 85993954, "node_id": "MDQ6VXNlcjg1OTkzOTU0", "avatar_url": "https://avatars.githubusercontent.com/u/85993954?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Flozii2", "html_url": "https://github.com/Flozii2", "followers_url": "https://api.github.com/users/Flozii2/followers", "following_url": "https://api.github.com/users/Flozii2/following{/other_user}", "gists_url": "https://api.github.com/users/Flozii2/gists{/gist_id}", "starred_url": "https://api.github.com/users/Flozii2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Flozii2/subscriptions", "organizations_url": "https://api.github.com/users/Flozii2/orgs", "repos_url": "https://api.github.com/users/Flozii2/repos", "events_url": "https://api.github.com/users/Flozii2/events{/privacy}", "received_events_url": "https://api.github.com/users/Flozii2/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,661,094,603,000
1,661,160,339,000
1,661,159,398,000
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4869/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4869/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4869", "html_url": "https://github.com/huggingface/datasets/pull/4869", "diff_url": "https://github.com/huggingface/datasets/pull/4869.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4869.patch", "merged_at": 1661159398000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4868
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4868/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4868/comments
https://api.github.com/repos/huggingface/datasets/issues/4868/events
https://github.com/huggingface/datasets/pull/4868
1,345,191,322
PR_kwDODunzps49gBk0
4,868
adding mafand to datasets
{ "login": "dadelani", "id": 23586676, "node_id": "MDQ6VXNlcjIzNTg2Njc2", "avatar_url": "https://avatars.githubusercontent.com/u/23586676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dadelani", "html_url": "https://github.com/dadelani", "followers_url": "https://api.github.com/users/dadelani/followers", "following_url": "https://api.github.com/users/dadelani/following{/other_user}", "gists_url": "https://api.github.com/users/dadelani/gists{/gist_id}", "starred_url": "https://api.github.com/users/dadelani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dadelani/subscriptions", "organizations_url": "https://api.github.com/users/dadelani/orgs", "repos_url": "https://api.github.com/users/dadelani/repos", "events_url": "https://api.github.com/users/dadelani/events{/privacy}", "received_events_url": "https://api.github.com/users/dadelani/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892913, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEz", "url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": "This will not be worked on" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi @dadelani, thanks for your awesome contribution!!! :heart: \r\n\r\nHowever, now we are using the Hub to add new datasets, instead of this GitHub repo. \r\n\r\nYou could share this dataset under your Hub organization namespace: [Masakhane NLP](https://huggingface.co/masakhane). This way the dataset will be accessible using:\r\n```python\r\nds = load_dataset(\"masakhane/mafand\")\r\n```\r\n\r\nYou have the procedure documented in our online docs: \r\n- [Create a dataset loading script](https://huggingface.co/docs/datasets/dataset_script)\r\n- [Share](https://huggingface.co/docs/datasets/share)\r\n\r\nMoreover, datasets shared on the Hub no longer need the dummy data files.\r\n\r\nPlease, feel free to ping me if you need any further guidance/support.", "thank you for the comment. I have moved it to the Hub https://huggingface.co/datasets/masakhane/mafand", "Great job, @dadelani!!\r\n\r\nPlease, note that in the README.md file, the YAML tags should be preceded and followed by three dashes `---`, so that they are properly parsed. See, e.g.: https://raw.githubusercontent.com/huggingface/datasets/main/templates/README.md", "Also you could replace the line:\r\n```\r\n# Dataset Card for [Needs More Information]\r\n```\r\nwith\r\n```\r\n# Dataset Card for MAFAND-MT\r\n```", "Great, thank you for the feedback. I have fixed both issues." ]
1,661,009,174,000
1,661,166,050,000
1,661,158,343,000
CONTRIBUTOR
null
I'm adding the MAFAND dataset by Masakhane based on the paper/repository below: Paper: https://aclanthology.org/2022.naacl-main.223/ Code: https://github.com/masakhane-io/lafand-mt Please help merge this. Everything works except for creating the dummy data file.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4868/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4868/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4868", "html_url": "https://github.com/huggingface/datasets/pull/4868", "diff_url": "https://github.com/huggingface/datasets/pull/4868.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4868.patch", "merged_at": null }
true
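Following the move to the Hub described in the thread, the dataset loads directly from the `masakhane` namespace; the config name below (a language pair) is an assumption based on the MAFAND-MT setup, not something confirmed in the thread.

```python
from datasets import load_dataset

# "en-yor" (English-Yoruba) is an assumed config name; MAFAND-MT covers
# news-domain translation pairs for African languages.
ds = load_dataset("masakhane/mafand", "en-yor", split="train")
print(ds[0])
```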
https://api.github.com/repos/huggingface/datasets/issues/4867
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4867/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4867/comments
https://api.github.com/repos/huggingface/datasets/issues/4867/events
https://github.com/huggingface/datasets/pull/4867
1,344,982,646
PR_kwDODunzps49fZle
4,867
Complete tags of superglue dataset card
{ "login": "richarddwang", "id": 17963619, "node_id": "MDQ6VXNlcjE3OTYzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richarddwang", "html_url": "https://github.com/richarddwang", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "repos_url": "https://api.github.com/users/richarddwang/repos", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,660,952,679,000
1,661,159,643,000
1,661,158,711,000
CONTRIBUTOR
null
Related to #4479 .
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4867/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4867/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4867", "html_url": "https://github.com/huggingface/datasets/pull/4867", "diff_url": "https://github.com/huggingface/datasets/pull/4867.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4867.patch", "merged_at": 1661158711000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4866
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4866/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4866/comments
https://api.github.com/repos/huggingface/datasets/issues/4866/events
https://github.com/huggingface/datasets/pull/4866
1,344,809,132
PR_kwDODunzps49e1CP
4,866
amend docstring for dunder
{ "login": "schafsam", "id": 37704298, "node_id": "MDQ6VXNlcjM3NzA0Mjk4", "avatar_url": "https://avatars.githubusercontent.com/u/37704298?v=4", "gravatar_id": "", "url": "https://api.github.com/users/schafsam", "html_url": "https://github.com/schafsam", "followers_url": "https://api.github.com/users/schafsam/followers", "following_url": "https://api.github.com/users/schafsam/following{/other_user}", "gists_url": "https://api.github.com/users/schafsam/gists{/gist_id}", "starred_url": "https://api.github.com/users/schafsam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/schafsam/subscriptions", "organizations_url": "https://api.github.com/users/schafsam/orgs", "repos_url": "https://api.github.com/users/schafsam/repos", "events_url": "https://api.github.com/users/schafsam/events{/privacy}", "received_events_url": "https://api.github.com/users/schafsam/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4866). All of your documentation changes will be reflected on that endpoint." ]
1,660,936,155,000
1,661,160,474,000
null
NONE
null
Display dunder methods in docstrings with underscores, not as bold markdown.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4866/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4866/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4866", "html_url": "https://github.com/huggingface/datasets/pull/4866", "diff_url": "https://github.com/huggingface/datasets/pull/4866.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4866.patch", "merged_at": null }
true
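The change amounts to escaping double underscores so Markdown renderers do not turn a name like `__getattr__` into bold "getattr"; a minimal before/after sketch (the method name is just an example, not the one touched by the PR):

```python
def before():
    """Falls back to __getattr__ internally."""  # Markdown renders "getattr" in bold


def after():
    r"""Falls back to \_\_getattr\_\_ internally."""  # underscores survive rendering
```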
https://api.github.com/repos/huggingface/datasets/issues/4865
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4865/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4865/comments
https://api.github.com/repos/huggingface/datasets/issues/4865/events
https://github.com/huggingface/datasets/issues/4865
1,344,552,626
I_kwDODunzps5QJD6y
4,865
Dataset Viewer issue for MoritzLaurer/multilingual_nli
{ "login": "MoritzLaurer", "id": 41862082, "node_id": "MDQ6VXNlcjQxODYyMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/41862082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MoritzLaurer", "html_url": "https://github.com/MoritzLaurer", "followers_url": "https://api.github.com/users/MoritzLaurer/followers", "following_url": "https://api.github.com/users/MoritzLaurer/following{/other_user}", "gists_url": "https://api.github.com/users/MoritzLaurer/gists{/gist_id}", "starred_url": "https://api.github.com/users/MoritzLaurer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MoritzLaurer/subscriptions", "organizations_url": "https://api.github.com/users/MoritzLaurer/orgs", "repos_url": "https://api.github.com/users/MoritzLaurer/repos", "events_url": "https://api.github.com/users/MoritzLaurer/events{/privacy}", "received_events_url": "https://api.github.com/users/MoritzLaurer/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
null
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting @MoritzLaurer.\r\n\r\nCurrently, the dataset preview is working properly: https://huggingface.co/datasets/MoritzLaurer/multilingual_nli\r\n\r\nPlease note that when a dataset is modified, it might take some time until the preview is completely updated.\r\n\r\n@severo might it be worth adding a clearer error message, something like \"The preview is updating, please retry later\"?", "Thanks for your response. You are right, its now working well. I had waited for 30 min or so and refreshed several times and thought there was some other error. Yeah, a different error message sounds like a good idea to avoid confusion. ", "I'm closing this issue then.", "> @severo might it be worth adding a clearer error message, something like \"The preview is updating, please retry later\"?\r\n\r\nYes, it's a known issue, and we're about to ship a better version" ]
1,660,920,920,000
1,661,179,634,000
1,661,148,800,000
NONE
null
### Link _No response_ ### Description I've just uploaded a new dataset to the hub and the viewer does not work for some reason, see here: https://huggingface.co/datasets/MoritzLaurer/multilingual_nli It displays the error: ``` Status code: 400 Exception: Status400Error Message: The dataset does not exist. ``` Weirdly enough, the dataset viewer works for an earlier version of the same dataset. The only difference is that it is smaller, but I'm not aware of other changes I have made: https://huggingface.co/datasets/MoritzLaurer/multilingual_nli_test Do you know why the dataset viewer is not working? ### Owner _No response_
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4865/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4865/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4864
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4864/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4864/comments
https://api.github.com/repos/huggingface/datasets/issues/4864/events
https://github.com/huggingface/datasets/issues/4864
1,344,410,043
I_kwDODunzps5QIhG7
4,864
Allow pathlib PoxisPath in Dataset.read_json
{ "login": "cccntu", "id": 31893406, "node_id": "MDQ6VXNlcjMxODkzNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cccntu", "html_url": "https://github.com/cccntu", "followers_url": "https://api.github.com/users/cccntu/followers", "following_url": "https://api.github.com/users/cccntu/following{/other_user}", "gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}", "starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cccntu/subscriptions", "organizations_url": "https://api.github.com/users/cccntu/orgs", "repos_url": "https://api.github.com/users/cccntu/repos", "events_url": "https://api.github.com/users/cccntu/events{/privacy}", "received_events_url": "https://api.github.com/users/cccntu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,660,913,957,000
1,660,913,957,000
null
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** ``` from pathlib import Path from datasets import Dataset ds = Dataset.read_json(Path('data.json')) ``` causes an error ``` AttributeError: 'PosixPath' object has no attribute 'decode' ``` **Describe the solution you'd like** It should accept a `PosixPath` and read the JSON from it.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4864/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4864/timeline
null
null
null
null
false
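Until `Path` objects are accepted natively, a hedged workaround is to stringify the path before handing it over; `Dataset.from_json` is the public counterpart of the reader used in the report.

```python
from pathlib import Path
from datasets import Dataset

path = Path("data.json")  # hypothetical file
# Converting to str sidesteps the "'PosixPath' object has no attribute
# 'decode'" error until pathlib objects are supported directly.
ds = Dataset.from_json(str(path))
```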
https://api.github.com/repos/huggingface/datasets/issues/4863
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4863/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4863/comments
https://api.github.com/repos/huggingface/datasets/issues/4863/events
https://github.com/huggingface/datasets/issues/4863
1,343,737,668
I_kwDODunzps5QF89E
4,863
TFDS wiki_dialog dataset to Huggingface dataset
{ "login": "djaym7", "id": 12378820, "node_id": "MDQ6VXNlcjEyMzc4ODIw", "avatar_url": "https://avatars.githubusercontent.com/u/12378820?v=4", "gravatar_id": "", "url": "https://api.github.com/users/djaym7", "html_url": "https://github.com/djaym7", "followers_url": "https://api.github.com/users/djaym7/followers", "following_url": "https://api.github.com/users/djaym7/following{/other_user}", "gists_url": "https://api.github.com/users/djaym7/gists{/gist_id}", "starred_url": "https://api.github.com/users/djaym7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/djaym7/subscriptions", "organizations_url": "https://api.github.com/users/djaym7/orgs", "repos_url": "https://api.github.com/users/djaym7/repos", "events_url": "https://api.github.com/users/djaym7/events{/privacy}", "received_events_url": "https://api.github.com/users/djaym7/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "@albertvillanova any help ? The linked dataset is in beam format which is similar to wikipedia dataset in huggingface that you scripted..", "Nvm, I was able to port it to huggingface datasets, will upload to the hub soon", "https://huggingface.co/datasets/djaym7/wiki_dialog", "Thanks for the addition, @djaym7." ]
1,660,863,990,000
1,661,161,305,000
1,661,145,533,000
NONE
null
## Adding a Dataset - **Name:** *Wiki_dialog* - **Description:** https://github.com/google-research/dialog-inpainting#:~:text=JSON%20object%2C%20for-,example,-%3A - **Paper:** https://arxiv.org/abs/2205.09073 - **Data:** https://github.com/google-research/dialog-inpainting - **Motivation:** *Research and development on the biggest corpus of dialog data* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4863/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4863/timeline
null
null
null
null
false
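Per the thread above, the port now lives on the Hub; a hedged loading sketch (streaming, since the corpus is large; the split name is the usual default but still an assumption about the port's layout):

```python
from datasets import load_dataset

# Stream the ported dataset rather than downloading it all at once.
ds = load_dataset("djaym7/wiki_dialog", split="train", streaming=True)
print(next(iter(ds)))
```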
End of preview.
