| Column | Type | Values |
| --- | --- | --- |
| url | stringlengths | 58-61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72-75 |
| comments_url | stringlengths | 67-70 |
| events_url | stringlengths | 65-68 |
| html_url | stringlengths | 46-51 |
| id | int64 | 599M-1.23B |
| node_id | stringlengths | 18-32 |
| number | int64 | 1-4.31k |
| title | stringlengths | 1-276 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | null | |
| assignees | sequence | |
| milestone | null | |
| comments | sequence | |
| created_at | int64 | 1,587B-1,652B |
| updated_at | int64 | 1,587B-1,652B |
| closed_at | int64 | 1,587B-1,652B |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | stringlengths | 0-228k |
| reactions | dict | |
| timeline_url | stringlengths | 67-70 |
| performed_via_github_app | null | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/4311
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4311/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4311/comments
https://api.github.com/repos/huggingface/datasets/issues/4311/events
https://github.com/huggingface/datasets/pull/4311
1,231,369,438
PR_kwDODunzps43ln8-
4,311
[Imagefolder] Docs + Don't infer labels from file names when there are metadata + Error messages when metadata and images aren't linked correctly
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Merging this one since mario is off, I took care of adding some tests to make sure everything is fine. Will do the release after it" ]
1,652,197,935,000
1,652,203,182,000
1,652,202,707,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4311", "html_url": "https://github.com/huggingface/datasets/pull/4311", "diff_url": "https://github.com/huggingface/datasets/pull/4311.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4311.patch", "merged_at": 1652202707000 }
I updated the `docs/source/image_process.mdx` documentation and added an example for image captioning and object detection using `ImageFolder`. While doing so I also improved a few aspects: - we don't need to infer labels from file names when there is metadata - they can just be in the metadata if necessary - raise informative error messages when metadata and images aren't linked correctly: - when an image is missing a metadata file - when a metadata file is missing an image. I added some tests for these changes as well. cc @mariosasko
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4311/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4311/timeline
null
true
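The pull request above (#4311) documents `ImageFolder` with image metadata and the no-label-inference behaviour. As a rough illustration of the layout it describes, here is a minimal sketch; the folder name, file names, and caption column are made up for the example, and the sketch assumes a local `metadata.jsonl` whose rows carry a `file_name` field.

```python
from datasets import load_dataset

# Hypothetical layout, for illustration only:
# my_images/
#   metadata.jsonl   # one JSON object per line, e.g. {"file_name": "0001.png", "text": "a caption"}
#   0001.png
#   0002.png
ds = load_dataset("imagefolder", data_dir="my_images", split="train")

# With a metadata file present, ImageFolder is not expected to infer class labels
# from directory names; the extra columns (here "text") come from metadata.jsonl.
print(ds[0]["text"])
```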
https://api.github.com/repos/huggingface/datasets/issues/4310
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4310/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4310/comments
https://api.github.com/repos/huggingface/datasets/issues/4310/events
https://github.com/huggingface/datasets/issues/4310
1,231,319,815
I_kwDODunzps5JZHMH
4,310
Loading dataset with streaming: '_io.BufferedReader' object has no attribute 'loc'
{ "login": "milmin", "id": 72745467, "node_id": "MDQ6VXNlcjcyNzQ1NDY3", "avatar_url": "https://avatars.githubusercontent.com/u/72745467?v=4", "gravatar_id": "", "url": "https://api.github.com/users/milmin", "html_url": "https://github.com/milmin", "followers_url": "https://api.github.com/users/milmin/followers", "following_url": "https://api.github.com/users/milmin/following{/other_user}", "gists_url": "https://api.github.com/users/milmin/gists{/gist_id}", "starred_url": "https://api.github.com/users/milmin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/milmin/subscriptions", "organizations_url": "https://api.github.com/users/milmin/orgs", "repos_url": "https://api.github.com/users/milmin/repos", "events_url": "https://api.github.com/users/milmin/events{/privacy}", "received_events_url": "https://api.github.com/users/milmin/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,652,195,573,000
1,652,195,573,000
null
NONE
null
null
null
## Describe the bug Loading a datasets with `load_dataset` and `streaming=True` returns `AttributeError: '_io.BufferedReader' object has no attribute 'loc'`. Notice that loading with `streaming=False` works fine. In the following steps we load parquet files but the same happens with pickle files. The problem seems to come from `fsspec` lib, I put in the environment info also `s3fs` and `fsspec` versions since I'm loading from an s3 bucket. ## Steps to reproduce the bug ```python from datasets import load_dataset # path is the path to parquet files data_files = {"train": path + "meta_train.parquet.gzip", "test": path + "meta_test.parquet.gzip"} dataset = load_dataset("parquet", data_files=data_files, streaming=True) ``` ## Expected results A dataset object `datasets.dataset_dict.DatasetDict` ## Actual results ``` AttributeError Traceback (most recent call last) <command-562086> in <module> 11 12 data_files = {"train": path + "meta_train.parquet.gzip", "test": path + "meta_test.parquet.gzip"} ---> 13 dataset = load_dataset("parquet", data_files=data_files, streaming=True) /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1679 if streaming: 1680 extend_dataset_builder_for_streaming(builder_instance, use_auth_token=use_auth_token) -> 1681 return builder_instance.as_streaming_dataset( 1682 split=split, 1683 use_auth_token=use_auth_token, /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/builder.py in as_streaming_dataset(self, split, base_path, use_auth_token) 904 ) 905 self._check_manual_download(dl_manager) --> 906 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)} 907 # By default, return all splits 908 if split is None: /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/packaged_modules/parquet/parquet.py in _split_generators(self, dl_manager) 30 if not self.config.data_files: 31 raise ValueError(f"At least one data file must be specified, but got data_files={self.config.data_files}") ---> 32 data_files = dl_manager.download_and_extract(self.config.data_files) 33 if isinstance(data_files, (str, list, tuple)): 34 files = data_files /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in download_and_extract(self, url_or_urls) 798 799 def download_and_extract(self, url_or_urls): --> 800 return self.extract(self.download(url_or_urls)) 801 802 def iter_archive(self, urlpath_or_buf: Union[str, io.BufferedReader]) -> Iterable[Tuple]: /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in extract(self, path_or_paths) 776 777 def extract(self, path_or_paths): --> 778 urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True) 779 return urlpaths 780 /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc) 312 num_proc = 1 313 if num_proc <= 1 or len(iterable) <= num_proc: --> 314 mapped = [ 315 _single_map_nested((function, obj, types, None, True, None)) 316 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc) /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0) 313 if num_proc <= 1 or len(iterable) <= num_proc: 314 mapped = [ --> 315 _single_map_nested((function, obj, types, None, True, None)) 316 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc) 317 ] /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args) 267 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar} 268 else: --> 269 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar] 270 if isinstance(data_struct, list): 271 return mapped /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0) 267 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar} 268 else: --> 269 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar] 270 if isinstance(data_struct, list): 271 return mapped /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args) 249 # Singleton first to spare some computation 250 if not isinstance(data_struct, dict) and not isinstance(data_struct, types): --> 251 return function(data_struct) 252 253 # Reduce logging to keep things readable in multiprocessing with tqdm /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _extract(self, urlpath) 781 def _extract(self, urlpath: str) -> str: 782 urlpath = str(urlpath) --> 783 protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token) 784 if protocol is None: 785 # no extraction /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _get_extraction_protocol(urlpath, use_auth_token) 371 urlpath, kwargs = urlpath, {} 372 with fsspec.open(urlpath, **kwargs) as f: --> 373 return _get_extraction_protocol_with_magic_number(f) 374 375 /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _get_extraction_protocol_with_magic_number(f) 335 def _get_extraction_protocol_with_magic_number(f) -> Optional[str]: 336 """read the magic number from a file-like object and return the compression protocol""" --> 337 prev_loc = f.loc 338 magic_number = f.read(MAGIC_NUMBER_MAX_LENGTH) 339 f.seek(prev_loc) /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/fsspec/implementations/local.py in __getattr__(self, item) 337 338 def __getattr__(self, item): --> 339 return getattr(self.f, item) 340 341 def __enter__(self): AttributeError: '_io.BufferedReader' object has no attribute 'loc' ``` ## Environment info - `datasets` version: 2.1.0 - Platform: Linux-5.4.0-1071-aws-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 8.0.0 - Pandas version: 1.4.2 - `fsspec` version: 2021.08.1 - `s3fs` version: 2021.08.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4310/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4310/timeline
null
false
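The report above (#4310) boils down to a single call that fails only when streaming. A minimal sketch of the failing call and the reported workaround follows; the S3 prefix is made up and stands in for the reporter's bucket.

```python
from datasets import load_dataset

# Hypothetical S3 prefix; the reporter loads gzip-compressed parquet files from an S3 bucket.
path = "s3://my-bucket/data/"
data_files = {"train": path + "meta_train.parquet.gzip", "test": path + "meta_test.parquet.gzip"}

# Fails on datasets 2.1.0 with: AttributeError: '_io.BufferedReader' object has no attribute 'loc'
# dataset = load_dataset("parquet", data_files=data_files, streaming=True)

# Non-streaming mode works fine, as noted in the report.
dataset = load_dataset("parquet", data_files=data_files, streaming=False)
```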
https://api.github.com/repos/huggingface/datasets/issues/4309
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4309/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4309/comments
https://api.github.com/repos/huggingface/datasets/issues/4309/events
https://github.com/huggingface/datasets/pull/4309
1,231,232,935
PR_kwDODunzps43lKpm
4,309
[WIP] Add TEDLIUM dataset
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech", "name": "speech", "color": "d93f0b", "default": false, "description": "" } ]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4309). All of your documentation changes will be reflected on that endpoint.", "```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('./datasets/tedlium', 'release1', cache_dir='/home/sanchitgandhi/cache')\r\n```\r\n\r\n```\r\nDownloading and preparing dataset tedlium/release1 to /home/sanchitgandhi/cache/tedlium/release1/1.0.1/5a9fcb97b4b52d5a1c9dc7bde4b1d5994cd89c4a3425ea36c789bf6096fee4f0...\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/sanchit_huggingface_co/datasets/src/datasets/load.py\", line 1703, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/sanchit_huggingface_co/datasets/src/datasets/builder.py\", line 605, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/sanchit_huggingface_co/datasets/src/datasets/builder.py\", line 1240, in _download_and_prepare\r\n raise MissingBeamOptions(\r\ndatasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/\r\nIf you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). \r\nExample of usage: \r\n `load_dataset('tedlium', 'release1', beam_runner='DirectRunner')`\r\n```\r\nSpecifying the `beam_runner='DirectRunner'` works:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('./datasets/tedlium', 'release1', cache_dir='/home/sanchitgandhi/cache', beam_runner='DirectRunner')\r\n```" ]
1,652,191,967,000
1,652,200,645,000
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4309", "html_url": "https://github.com/huggingface/datasets/pull/4309", "diff_url": "https://github.com/huggingface/datasets/pull/4309.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4309.patch", "merged_at": null }
Adds the TED-LIUM dataset https://www.tensorflow.org/datasets/catalog/tedlium#tedliumrelease3 TODO: - [x] Port `tedlium.py` from TF datasets using the `convert_dataset.sh` script - [ ] Make `load_dataset` work - [ ] Run `datasets-cli` command to generate `dataset_infos.json` - [ ] Create dummy data for continuous testing - [ ] Dummy data tests - [ ] Real data tests - [ ] Create the metadata JSON - [ ] Close PR and add directly to the Hub under the LIUM org
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4309/timeline
null
true
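As the PR discussion above shows, the TEDLIUM builder is Beam-based, so `load_dataset` needs a Beam runner. The sketch below restates the usage from the PR comments; the local path assumes a checkout of the PR branch with the script under `./datasets/tedlium`.

```python
from datasets import load_dataset

# Beam-based builders need a runner; for small local tests the DirectRunner can be used
# (it may run out of memory on large releases), per the error message quoted in the PR comments.
ds = load_dataset("./datasets/tedlium", "release1", beam_runner="DirectRunner")
```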
https://api.github.com/repos/huggingface/datasets/issues/4308
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4308/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4308/comments
https://api.github.com/repos/huggingface/datasets/issues/4308/events
https://github.com/huggingface/datasets/pull/4308
1,231,217,783
PR_kwDODunzps43lHdP
4,308
Remove unused multiprocessing args from test CLI
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4308). All of your documentation changes will be reflected on that endpoint." ]
1,652,191,335,000
1,652,192,606,000
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4308", "html_url": "https://github.com/huggingface/datasets/pull/4308", "diff_url": "https://github.com/huggingface/datasets/pull/4308.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4308.patch", "merged_at": null }
Multiprocessing is not used in the test CLI.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4308/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4308/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4307
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4307/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4307/comments
https://api.github.com/repos/huggingface/datasets/issues/4307/events
https://github.com/huggingface/datasets/pull/4307
1,231,175,639
PR_kwDODunzps43k-Wo
4,307
Add packaged builder configs to the documentation
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652,189,659,000
1,652,191,430,000
1,652,190,954,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4307", "html_url": "https://github.com/huggingface/datasets/pull/4307", "diff_url": "https://github.com/huggingface/datasets/pull/4307.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4307.patch", "merged_at": 1652190954000 }
Adding the packaged builder configurations to the docs reference is useful to show the list of all parameters one can use when loading data in many formats: CSV, JSON, etc.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4307/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4307/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4306
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4306/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4306/comments
https://api.github.com/repos/huggingface/datasets/issues/4306/events
https://github.com/huggingface/datasets/issues/4306
1,231,137,204
I_kwDODunzps5JYam0
4,306
`load_dataset` does not work with certain filename.
{ "login": "wusuowei60", "id": 57242693, "node_id": "MDQ6VXNlcjU3MjQyNjkz", "avatar_url": "https://avatars.githubusercontent.com/u/57242693?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wusuowei60", "html_url": "https://github.com/wusuowei60", "followers_url": "https://api.github.com/users/wusuowei60/followers", "following_url": "https://api.github.com/users/wusuowei60/following{/other_user}", "gists_url": "https://api.github.com/users/wusuowei60/gists{/gist_id}", "starred_url": "https://api.github.com/users/wusuowei60/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wusuowei60/subscriptions", "organizations_url": "https://api.github.com/users/wusuowei60/orgs", "repos_url": "https://api.github.com/users/wusuowei60/repos", "events_url": "https://api.github.com/users/wusuowei60/events{/privacy}", "received_events_url": "https://api.github.com/users/wusuowei60/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Never mind. It is because of the caching of datasets..." ]
1,652,188,444,000
1,652,209,116,000
1,652,209,089,000
NONE
null
null
null
## Describe the bug This is a weird bug that took me some time to find out. I have a JSON dataset that I want to load with `load_dataset` like this: ``` data_files = dict(train="train.json.zip", val="val.json.zip") dataset = load_dataset("json", data_files=data_files, field="data") ``` ## Expected results No error. ## Actual results The val file is loaded as expected, but the train file throws JSON decoding error: ``` ╭──────────────────────────── Traceback (most recent call last) ────────────────────────────╮ │ <ipython-input-74-97947e92c100>:5 in <module> │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/load.py:1687 in │ │ load_dataset │ │ │ │ 1684 │ try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES │ │ 1685 │ │ │ 1686 │ # Download and prepare data │ │ ❱ 1687 │ builder_instance.download_and_prepare( │ │ 1688 │ │ download_config=download_config, │ │ 1689 │ │ download_mode=download_mode, │ │ 1690 │ │ ignore_verifications=ignore_verifications, │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/builder.py:605 in │ │ download_and_prepare │ │ │ │ 602 │ │ │ │ │ │ except ConnectionError: │ │ 603 │ │ │ │ │ │ │ logger.warning("HF google storage unreachable. Downloa │ │ 604 │ │ │ │ │ if not downloaded_from_gcs: │ │ ❱ 605 │ │ │ │ │ │ self._download_and_prepare( │ │ 606 │ │ │ │ │ │ │ dl_manager=dl_manager, verify_infos=verify_infos, **do │ │ 607 │ │ │ │ │ │ ) │ │ 608 │ │ │ │ │ # Sync info │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/builder.py:694 in │ │ _download_and_prepare │ │ │ │ 691 │ │ │ │ │ 692 │ │ │ try: │ │ 693 │ │ │ │ # Prepare split will record examples associated to the split │ │ ❱ 694 │ │ │ │ self._prepare_split(split_generator, **prepare_split_kwargs) │ │ 695 │ │ │ except OSError as e: │ │ 696 │ │ │ │ raise OSError( │ │ 697 │ │ │ │ │ "Cannot find data file. " │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/builder.py:1151 in │ │ _prepare_split │ │ │ │ 1148 │ │ │ │ 1149 │ │ generator = self._generate_tables(**split_generator.gen_kwargs) │ │ 1150 │ │ with ArrowWriter(features=self.info.features, path=fpath) as writer: │ │ ❱ 1151 │ │ │ for key, table in logging.tqdm( │ │ 1152 │ │ │ │ generator, unit=" tables", leave=False, disable=True # not loggin │ │ 1153 │ │ │ ): │ │ 1154 │ │ │ │ writer.write_table(table) │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/tqdm/notebook.py:257 in │ │ __iter__ │ │ │ │ 254 │ │ │ 255 │ def __iter__(self): │ │ 256 │ │ try: │ │ ❱ 257 │ │ │ for obj in super(tqdm_notebook, self).__iter__(): │ │ 258 │ │ │ │ # return super(tqdm...) will not catch exception │ │ 259 │ │ │ │ yield obj │ │ 260 │ │ # NB: except ... [ as ...] breaks IPython async KeyboardInterrupt │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/tqdm/std.py:1183 in │ │ __iter__ │ │ │ │ 1180 │ │ # If the bar is disabled, then just walk the iterable │ │ 1181 │ │ # (note: keep this check outside the loop for performance) │ │ 1182 │ │ if self.disable: │ │ ❱ 1183 │ │ │ for obj in iterable: │ │ 1184 │ │ │ │ yield obj │ │ 1185 │ │ │ return │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/packaged_modules/j │ │ son/json.py:90 in _generate_tables │ │ │ │ 87 │ │ │ # If the file is one json object and if we need to look at the list of │ │ 88 │ │ │ if self.config.field is not None: │ │ 89 │ │ │ │ with open(file, encoding="utf-8") as f: │ │ ❱ 90 │ │ │ │ │ dataset = json.load(f) │ │ 91 │ │ │ │ │ │ 92 │ │ │ │ # We keep only the field we are interested in │ │ 93 │ │ │ │ dataset = dataset[self.config.field] │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/json/__init__.py:293 in load │ │ │ │ 290 │ To use a custom ``JSONDecoder`` subclass, specify it with the ``cls`` │ │ 291 │ kwarg; otherwise ``JSONDecoder`` is used. │ │ 292 │ """ │ │ ❱ 293 │ return loads(fp.read(), │ │ 294 │ │ cls=cls, object_hook=object_hook, │ │ 295 │ │ parse_float=parse_float, parse_int=parse_int, │ │ 296 │ │ parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw) │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/json/__init__.py:357 in loads │ │ │ │ 354 │ if (cls is None and object_hook is None and │ │ 355 │ │ │ parse_int is None and parse_float is None and │ │ 356 │ │ │ parse_constant is None and object_pairs_hook is None and not kw): │ │ ❱ 357 │ │ return _default_decoder.decode(s) │ │ 358 │ if cls is None: │ │ 359 │ │ cls = JSONDecoder │ │ 360 │ if object_hook is not None: │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/json/decoder.py:337 in decode │ │ │ │ 334 │ │ containing a JSON document). │ │ 335 │ │ │ │ 336 │ │ """ │ │ ❱ 337 │ │ obj, end = self.raw_decode(s, idx=_w(s, 0).end()) │ │ 338 │ │ end = _w(s, end).end() │ │ 339 │ │ if end != len(s): │ │ 340 │ │ │ raise JSONDecodeError("Extra data", s, end) │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/json/decoder.py:353 in raw_decode │ │ │ │ 350 │ │ │ │ 351 │ │ """ │ │ 352 │ │ try: │ │ ❱ 353 │ │ │ obj, end = self.scan_once(s, idx) │ │ 354 │ │ except StopIteration as err: │ │ 355 │ │ │ raise JSONDecodeError("Expecting value", s, err.value) from None │ │ 356 │ │ return obj, end │ ╰───────────────────────────────────────────────────────────────────────────────────────────╯ JSONDecodeError: Unterminated string starting at: line 85 column 20 (char 60051) ``` However, when I rename the `train.json.zip` to other names (like `training.json.zip`, or even to `train.json`), everything works fine; when I unzip the file to `train.json`, it works as well. ## Environment info ``` - `datasets` version: 2.1.0 - Platform: Linux-4.4.0-131-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyArrow version: 7.0.0 - Pandas version: 1.4.2 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4306/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4306/timeline
null
false
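Since the reporter of #4306 traced the problem to a stale cached copy of `train.json.zip`, a hedged sketch of forcing a fresh download is shown below; `download_mode="force_redownload"` is a standard `load_dataset` option, and the file names simply mirror the report.

```python
from datasets import load_dataset

data_files = dict(train="train.json.zip", val="val.json.zip")

# Bypass a possibly stale cache entry for these file names by forcing a re-download.
dataset = load_dataset(
    "json",
    data_files=data_files,
    field="data",
    download_mode="force_redownload",
)
```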
https://api.github.com/repos/huggingface/datasets/issues/4305
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4305/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4305/comments
https://api.github.com/repos/huggingface/datasets/issues/4305/events
https://github.com/huggingface/datasets/pull/4305
1,231,099,934
PR_kwDODunzps43kt4P
4,305
Fixes FrugalScore
{ "login": "moussaKam", "id": 28675016, "node_id": "MDQ6VXNlcjI4Njc1MDE2", "avatar_url": "https://avatars.githubusercontent.com/u/28675016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/moussaKam", "html_url": "https://github.com/moussaKam", "followers_url": "https://api.github.com/users/moussaKam/followers", "following_url": "https://api.github.com/users/moussaKam/following{/other_user}", "gists_url": "https://api.github.com/users/moussaKam/gists{/gist_id}", "starred_url": "https://api.github.com/users/moussaKam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/moussaKam/subscriptions", "organizations_url": "https://api.github.com/users/moussaKam/orgs", "repos_url": "https://api.github.com/users/moussaKam/repos", "events_url": "https://api.github.com/users/moussaKam/events{/privacy}", "received_events_url": "https://api.github.com/users/moussaKam/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4305). All of your documentation changes will be reflected on that endpoint." ]
1,652,186,646,000
1,652,187,343,000
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4305", "html_url": "https://github.com/huggingface/datasets/pull/4305", "diff_url": "https://github.com/huggingface/datasets/pull/4305.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4305.patch", "merged_at": null }
There are two minor modifications in this PR: 1) `predictions` and `references` are swapped. Basically FrugalScore is commutative; however, some tiny differences can occur if we swap the references and the predictions. I decided to swap them just to obtain the exact results reported in the paper. 2) I switched to the dynamic padding that was used in training; forcing the padding to `max_length` introduces errors for a reason I haven't identified. @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4305/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4305/timeline
null
true
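For context on the metric touched by the PR above, a minimal usage sketch follows; it assumes the FrugalScore metric loads with default settings and only illustrates the `predictions`/`references` argument order discussed in the PR.

```python
from datasets import load_metric

# FrugalScore is (almost) commutative, but the PR swaps the arguments to match the paper exactly.
frugalscore = load_metric("frugalscore")
scores = frugalscore.compute(
    predictions=["hello there", "general kenobi"],
    references=["hello there", "general kenobi"],
)
print(scores)
```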
https://api.github.com/repos/huggingface/datasets/issues/4304
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4304/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4304/comments
https://api.github.com/repos/huggingface/datasets/issues/4304/events
https://github.com/huggingface/datasets/issues/4304
1,231,047,051
I_kwDODunzps5JYEmL
4,304
Language code search does direct matches
{ "login": "leondz", "id": 121934, "node_id": "MDQ6VXNlcjEyMTkzNA==", "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leondz", "html_url": "https://github.com/leondz", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "organizations_url": "https://api.github.com/users/leondz/orgs", "repos_url": "https://api.github.com/users/leondz/repos", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "received_events_url": "https://api.github.com/users/leondz/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Thanks for reporting ! I forwarded the issue to the front-end team :)\r\n\r\nWill keep you posted !\r\n\r\nI also changed the tagging app to suggest two letters code for now." ]
1,652,183,956,000
1,652,186,322,000
null
NONE
null
null
null
## Describe the bug Hi. Searching for bcp47 tags that are just the language prefix (e.g. `sq` or `da`) excludes datasets that have added extra information in their language metadata (e.g. `sq-AL` or `da-bornholm`). The example codes given in the [tagging app](https://huggingface.co/spaces/huggingface/datasets-tagging) encourages addition of the additional codes ("_expected format is BCP47 tags separated for ';' e.g. 'en-US;fr-FR'_") but this would lead to those datasets being hidden in datasets search. ## Steps to reproduce the bug 1. Add a dataset using a variant tag (e.g. [`sq-AL`](https://huggingface.co/datasets?languages=languages:sq-AL)) 2. Look for datasets using the full code 3. Note that they're missing when just the language is searched for (e.g. [`sq`](https://huggingface.co/datasets?languages=languages:sq)) Some datasets are already affected by this - e.g. `AmazonScience/massive` is listed under `sq-AL` but not `sq`. One workaround is for dataset creators to add an additional root language tag to dataset YAML metadata, but it's unclear how to communicate this. It might be possible to index the search on `languagecode.split('-')[0]` but I wanted to float this issue before trying to write any code :) ## Expected results Datasets using longer bcp47 tags also appear under searches for just the language code; e.g. Quebecois datasets (`fr-CA`) would come up when looking for French datasets with no region specification (`fr`), or US English (`en-US`) datasets would come up when searching for English datasets (`en`). ## Actual results The language codes seem to be directly string matched, excluding datasets with specific language tags from non-specific searches. ## Environment info (web app)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4304/timeline
null
false
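Issue #4304 above suggests matching on the language prefix of a BCP-47 tag. The sketch below is only a rough illustration of that idea in plain Python, not the Hub's actual search implementation.

```python
def matches_language(dataset_tags, query):
    """Return True if any tag equals the query or shares its BCP-47 language prefix."""
    return any(tag == query or tag.split("-")[0] == query for tag in dataset_tags)

# Variant tags should surface under the bare language code.
assert matches_language(["sq-AL"], "sq")
assert matches_language(["fr-CA", "en"], "fr")
assert not matches_language(["da-bornholm"], "sq")
```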
https://api.github.com/repos/huggingface/datasets/issues/4303
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4303/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4303/comments
https://api.github.com/repos/huggingface/datasets/issues/4303/events
https://github.com/huggingface/datasets/pull/4303
1,230,867,728
PR_kwDODunzps43j8cH
4,303
Fix: Add missing comma
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,652,174,498,000
1,652,174,498,000
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4303", "html_url": "https://github.com/huggingface/datasets/pull/4303", "diff_url": "https://github.com/huggingface/datasets/pull/4303.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4303.patch", "merged_at": null }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4303/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4303/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4302
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4302/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4302/comments
https://api.github.com/repos/huggingface/datasets/issues/4302/events
https://github.com/huggingface/datasets/pull/4302
1,230,651,117
PR_kwDODunzps43jPE5
4,302
Remove hacking license tags when mirroring datasets on the Hub
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4302). All of your documentation changes will be reflected on that endpoint.", "The Hub doesn't allow these characters in the YAML tags, and git push fails if you want to push a dataset card containing these characters." ]
1,652,161,966,000
1,652,204,850,000
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4302", "html_url": "https://github.com/huggingface/datasets/pull/4302", "diff_url": "https://github.com/huggingface/datasets/pull/4302.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4302.patch", "merged_at": null }
Currently, when mirroring datasets on the Hub, the license tags are hacked: the characters "." and "$" are stripped out. By contrast, this hacking is not applied to community datasets on the Hub. This generates multiple variants of the same tag on the Hub. I guess this hacking is no longer necessary: - it is not applied to community datasets - all canonical datasets are validated by maintainers before being merged: CI + maintainers make sure license tags are the right ones. Fix #4298.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4302/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4302/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4301
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4301/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4301/comments
https://api.github.com/repos/huggingface/datasets/issues/4301/events
https://github.com/huggingface/datasets/pull/4301
1,230,401,256
PR_kwDODunzps43idlE
4,301
Add ImageNet-Sketch dataset
{ "login": "nateraw", "id": 32437151, "node_id": "MDQ6VXNlcjMyNDM3MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nateraw", "html_url": "https://github.com/nateraw", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "organizations_url": "https://api.github.com/users/nateraw/orgs", "repos_url": "https://api.github.com/users/nateraw/repos", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "received_events_url": "https://api.github.com/users/nateraw/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4301). All of your documentation changes will be reflected on that endpoint." ]
1,652,139,525,000
1,652,211,471,000
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4301", "html_url": "https://github.com/huggingface/datasets/pull/4301", "diff_url": "https://github.com/huggingface/datasets/pull/4301.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4301.patch", "merged_at": null }
This PR adds the ImageNet-Sketch dataset and resolves #3953 .
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4301/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4300
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4300/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4300/comments
https://api.github.com/repos/huggingface/datasets/issues/4300/events
https://github.com/huggingface/datasets/pull/4300
1,230,272,761
PR_kwDODunzps43iA86
4,300
Add API code examples for loading methods
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4300). All of your documentation changes will be reflected on that endpoint." ]
1,652,131,826,000
1,652,132,501,000
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4300", "html_url": "https://github.com/huggingface/datasets/pull/4300", "diff_url": "https://github.com/huggingface/datasets/pull/4300.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4300.patch", "merged_at": null }
This PR adds API code examples for loading methods, let me know if I've missed any important parameters we should showcase :) I was a bit confused about `inspect_dataset` and `inspect_metric`. The `path` parameter says it will accept a dataset identifier from the Hub. But when I try the identifier `rotten_tomatoes`, it gives me: ```py from datasets import inspect_dataset inspect_dataset('rotten_tomatoes', local_path='/content/rotten_tomatoes') FileNotFoundError: Couldn't find a dataset script at /content/rotten_tomatoes/rotten_tomatoes.py or any data file in the same directory. ``` Does the user need to have an existing copy of `rotten_tomatoes.py` on their local drive (in which case, it seems like the same option as the first option in `path`)?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4300/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4300/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4299
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4299/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4299/comments
https://api.github.com/repos/huggingface/datasets/issues/4299/events
https://github.com/huggingface/datasets/pull/4299
1,230,236,782
PR_kwDODunzps43h5RP
4,299
Remove manual download from imagenet-1k
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4299). All of your documentation changes will be reflected on that endpoint." ]
1,652,129,358,000
1,652,130,131,000
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4299", "html_url": "https://github.com/huggingface/datasets/pull/4299", "diff_url": "https://github.com/huggingface/datasets/pull/4299.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4299.patch", "merged_at": null }
Remove the manual download code from `imagenet-1k` to make it a regular dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4299/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4299/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4298
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4298/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4298/comments
https://api.github.com/repos/huggingface/datasets/issues/4298/events
https://github.com/huggingface/datasets/issues/4298
1,229,748,006
I_kwDODunzps5JTHcm
4,298
Normalise license names
{ "login": "leondz", "id": 121934, "node_id": "MDQ6VXNlcjEyMTkzNA==", "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leondz", "html_url": "https://github.com/leondz", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "organizations_url": "https://api.github.com/users/leondz/orgs", "repos_url": "https://api.github.com/users/leondz/repos", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "received_events_url": "https://api.github.com/users/leondz/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "we'll add the same server-side metadata validation system as for hf.co/models soon-ish\r\n\r\n(you can check on hf.co/models that licenses are \"clean\")" ]
1,652,104,292,000
1,652,204,645,000
null
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** When browsing datasets, the Licenses tag cloud (bottom left of e.g. https://huggingface.co/datasets) has multiple variants of the same license. This means the options exclude datasets arbitrarily, giving users artificially low recall. The cause of the dupes is probably due to a bit of variation in metadata. **Describe the solution you'd like** I'd like the licenses in metadata to follow the same standard as much as possible, to remove this problem. I'd like to go ahead and normalise the dataset metadata to follow the format & values given in [src/datasets/utils/resources/licenses.json](https://github.com/huggingface/datasets/blob/master/src/datasets/utils/resources/licenses.json) . **Describe alternatives you've considered** None **Additional context** None **Priority** Low
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4298/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4298/timeline
null
false
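A rough sketch of the normalisation that issue #4298 asks for is shown below; the structure of `licenses.json` is assumed here (a mapping whose keys are the canonical license IDs), so treat this as an illustration rather than the repository's actual validation code.

```python
import json

# Assumed structure: licenses.json maps canonical license IDs (e.g. "apache-2.0") to display names.
with open("src/datasets/utils/resources/licenses.json", encoding="utf-8") as f:
    canonical_ids = {key.lower() for key in json.load(f)}

def normalise_license(tag):
    """Lower-case and strip a metadata license tag; return it only if it is a canonical ID."""
    tag = tag.strip().lower()
    return tag if tag in canonical_ids else None

print(normalise_license("Apache-2.0"))   # -> "apache-2.0" if that ID is listed
print(normalise_license("apache 2.0"))   # -> None: needs a manual mapping to a canonical ID
```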
https://api.github.com/repos/huggingface/datasets/issues/4297
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4297/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4297/comments
https://api.github.com/repos/huggingface/datasets/issues/4297/events
https://github.com/huggingface/datasets/issues/4297
1,229,735,498
I_kwDODunzps5JTEZK
4,297
Datasets YAML tagging space is down
{ "login": "leondz", "id": 121934, "node_id": "MDQ6VXNlcjEyMTkzNA==", "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leondz", "html_url": "https://github.com/leondz", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "organizations_url": "https://api.github.com/users/leondz/orgs", "repos_url": "https://api.github.com/users/leondz/repos", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "received_events_url": "https://api.github.com/users/leondz/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "@lhoestq @albertvillanova `update-task-list` branch does not exist anymore, should point to `main` now i guess", "Thanks for reporting, fixing it now", "It's up again :)" ]
1,652,103,905,000
1,652,107,465,000
1,652,107,465,000
NONE
null
null
null
## Describe the bug The neat hf spaces app for generating YAML tags for dataset `README.md`s is down ## Steps to reproduce the bug 1. Visit https://huggingface.co/spaces/huggingface/datasets-tagging ## Expected results There'll be a HF spaces web app for generating dataset metadata YAML ## Actual results There's an error message; here's the step where it breaks: ``` Step 18/29 : RUN pip install -r requirements.txt ---> Running in e88bfe7e7e0c Defaulting to user installation because normal site-packages is not writeable Collecting git+https://github.com/huggingface/datasets.git@update-task-list (from -r requirements.txt (line 4)) Cloning https://github.com/huggingface/datasets.git (to revision update-task-list) to /tmp/pip-req-build-bm8t0r0k Running command git clone --filter=blob:none --quiet https://github.com/huggingface/datasets.git /tmp/pip-req-build-bm8t0r0k WARNING: Did not find branch or tag 'update-task-list', assuming revision or ref. Running command git checkout -q update-task-list error: pathspec 'update-task-list' did not match any file(s) known to git error: subprocess-exited-with-error × git checkout -q update-task-list did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × git checkout -q update-task-list did not run successfully. │ exit code: 1 ╰─> See above for output. ``` ## Environment info - Platform: Linux / Brave
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4297/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/4296
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4296/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4296/comments
https://api.github.com/repos/huggingface/datasets/issues/4296/events
https://github.com/huggingface/datasets/pull/4296
1,229,554,645
PR_kwDODunzps43foZ-
4,296
Fix URL query parameters in compression hop path when streaming
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4296). All of your documentation changes will be reflected on that endpoint." ]
1,652,095,102,000
1,652,096,518,000
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4296", "html_url": "https://github.com/huggingface/datasets/pull/4296", "diff_url": "https://github.com/huggingface/datasets/pull/4296.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4296.patch", "merged_at": null }
Fix #3488.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4296/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4295
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4295/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4295/comments
https://api.github.com/repos/huggingface/datasets/issues/4295/events
https://github.com/huggingface/datasets/pull/4295
1,229,527,283
PR_kwDODunzps43fieR
4,295
Fix missing lz4 dependency for tests
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652,093,600,000
1,652,095,282,000
1,652,094,824,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4295", "html_url": "https://github.com/huggingface/datasets/pull/4295", "diff_url": "https://github.com/huggingface/datasets/pull/4295.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4295.patch", "merged_at": 1652094824000 }
Currently, `lz4` is not defined as a dependency for tests. Therefore, all tests marked with `@require_lz4` are skipped.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4295/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4295/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4294
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4294/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4294/comments
https://api.github.com/repos/huggingface/datasets/issues/4294/events
https://github.com/huggingface/datasets/pull/4294
1,229,455,582
PR_kwDODunzps43fTXA
4,294
Fix CLI run_beam save_infos
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,652,089,663,000
1,652,166,244,000
1,652,165,770,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4294", "html_url": "https://github.com/huggingface/datasets/pull/4294", "diff_url": "https://github.com/huggingface/datasets/pull/4294.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4294.patch", "merged_at": 1652165770000 }
Currently, it raises TypeError: ``` TypeError: _download_and_prepare() got an unexpected keyword argument 'save_infos' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4294/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4293
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4293/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4293/comments
https://api.github.com/repos/huggingface/datasets/issues/4293/events
https://github.com/huggingface/datasets/pull/4293
1,228,815,477
PR_kwDODunzps43dRt9
4,293
Fix wrong map parameter name in cache docs
{ "login": "h4iku", "id": 3812788, "node_id": "MDQ6VXNlcjM4MTI3ODg=", "avatar_url": "https://avatars.githubusercontent.com/u/3812788?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h4iku", "html_url": "https://github.com/h4iku", "followers_url": "https://api.github.com/users/h4iku/followers", "following_url": "https://api.github.com/users/h4iku/following{/other_user}", "gists_url": "https://api.github.com/users/h4iku/gists{/gist_id}", "starred_url": "https://api.github.com/users/h4iku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h4iku/subscriptions", "organizations_url": "https://api.github.com/users/h4iku/orgs", "repos_url": "https://api.github.com/users/h4iku/repos", "events_url": "https://api.github.com/users/h4iku/events{/privacy}", "received_events_url": "https://api.github.com/users/h4iku/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4293). All of your documentation changes will be reflected on that endpoint." ]
1,651,994,866,000
1,651,995,554,000
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4293", "html_url": "https://github.com/huggingface/datasets/pull/4293", "diff_url": "https://github.com/huggingface/datasets/pull/4293.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4293.patch", "merged_at": null }
The `load_from_cache` parameter of `map` should be `load_from_cache_file`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4293/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4293/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4292
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4292/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4292/comments
https://api.github.com/repos/huggingface/datasets/issues/4292/events
https://github.com/huggingface/datasets/pull/4292
1,228,216,788
PR_kwDODunzps43bhrp
4,292
Add API code examples for remaining main classes
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4292). All of your documentation changes will be reflected on that endpoint." ]
1,651,860,931,000
1,652,118,723,000
null
MEMBER
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4292", "html_url": "https://github.com/huggingface/datasets/pull/4292", "diff_url": "https://github.com/huggingface/datasets/pull/4292.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4292.patch", "merged_at": null }
This PR adds API code examples for the remaining functions in the Main classes. I wasn't too familiar with some of the functions (`decode_batch`, `decode_column`, `decode_example`, etc.) so please feel free to add an example of usage and I can fill in the rest :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4292/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4292/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4291
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4291/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4291/comments
https://api.github.com/repos/huggingface/datasets/issues/4291/events
https://github.com/huggingface/datasets/issues/4291
1,227,777,500
I_kwDODunzps5JLmXc
4,291
Dataset Viewer issue for strombergnlp/ipm_nel : preview is empty, no error message
{ "login": "leondz", "id": 121934, "node_id": "MDQ6VXNlcjEyMTkzNA==", "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leondz", "html_url": "https://github.com/leondz", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "organizations_url": "https://api.github.com/users/leondz/orgs", "repos_url": "https://api.github.com/users/leondz/repos", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "received_events_url": "https://api.github.com/users/leondz/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
null
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @leondz, thanks for reporting.\r\n\r\nIndeed, the dataset viewer relies on the dataset being streamable (passing `streaming=True` to `load_dataset`). Whereas most of the datastes are streamable out of the box (thanks to our implementation of streaming), there are still some exceptions.\r\n\r\nIn particular, in your case, that is due to the data file being TAR. This format is not streamable out of the box (it does not allow random access to the archived files), but we use a trick to allow streaming: using `dl_manager.iter_archive`.\r\n\r\nLet me know if you need some help: I could push a commit to your repo with the fix.", "Ah, right! The preview is working now, but this explanation is good to know, thank you. I'll prefer formats with random file access supported in datasets.utils.extract in future, and try out this fix for the tarfiles :)" ]
1,651,838,607,000
1,652,084,758,000
1,652,084,758,000
NONE
null
null
null
### Link https://huggingface.co/datasets/strombergnlp/ipm_nel/viewer/ipm_nel/train ### Description The viewer is blank. I tried my best to emulate a dataset with a working viewer, but this one just doesn't seem to want to come up. What did I miss? ### Owner Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4291/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/4290
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4290/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4290/comments
https://api.github.com/repos/huggingface/datasets/issues/4290/events
https://github.com/huggingface/datasets/pull/4290
1,227,592,826
PR_kwDODunzps43Zr08
4,290
Update README.md
{ "login": "monk1337", "id": 17107749, "node_id": "MDQ6VXNlcjE3MTA3NzQ5", "avatar_url": "https://avatars.githubusercontent.com/u/17107749?v=4", "gravatar_id": "", "url": "https://api.github.com/users/monk1337", "html_url": "https://github.com/monk1337", "followers_url": "https://api.github.com/users/monk1337/followers", "following_url": "https://api.github.com/users/monk1337/following{/other_user}", "gists_url": "https://api.github.com/users/monk1337/gists{/gist_id}", "starred_url": "https://api.github.com/users/monk1337/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/monk1337/subscriptions", "organizations_url": "https://api.github.com/users/monk1337/orgs", "repos_url": "https://api.github.com/users/monk1337/repos", "events_url": "https://api.github.com/users/monk1337/events{/privacy}", "received_events_url": "https://api.github.com/users/monk1337/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4290). All of your documentation changes will be reflected on that endpoint." ]
1,651,827,171,000
1,651,827,848,000
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4290", "html_url": "https://github.com/huggingface/datasets/pull/4290", "diff_url": "https://github.com/huggingface/datasets/pull/4290.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4290.patch", "merged_at": null }
Updating the README of the medmcqa dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4290/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4290/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4288
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4288/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4288/comments
https://api.github.com/repos/huggingface/datasets/issues/4288/events
https://github.com/huggingface/datasets/pull/4288
1,226,821,732
PR_kwDODunzps43XLKi
4,288
Add missing `faiss` import to fix https://github.com/huggingface/datasets/issues/4287
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,651,764,109,000
1,652,187,306,000
1,652,184,588,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4288", "html_url": "https://github.com/huggingface/datasets/pull/4288", "diff_url": "https://github.com/huggingface/datasets/pull/4288.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4288.patch", "merged_at": 1652184588000 }
This PR fixes the issue recently mentioned in https://github.com/huggingface/datasets/issues/4287 🤗
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4288/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4287
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4287/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4287/comments
https://api.github.com/repos/huggingface/datasets/issues/4287/events
https://github.com/huggingface/datasets/issues/4287
1,226,806,652
I_kwDODunzps5JH5V8
4,287
"NameError: name 'faiss' is not defined" on `.add_faiss_index` when `device` is not None
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "So I managed to solve this by adding a missing `import faiss` in the `@staticmethod` defined in https://github.com/huggingface/datasets/blob/f51b6994db27ea69261ef919fb7775928f9ec10b/src/datasets/search.py#L305, triggered from https://github.com/huggingface/datasets/blob/f51b6994db27ea69261ef919fb7775928f9ec10b/src/datasets/search.py#L249 when trying to `ds_with_embeddings.add_faiss_index(column='embeddings', device=0)` with the code above.\r\n\r\nAs it seems that the `@staticmethod` doesn't recognize the `import faiss` defined in https://github.com/huggingface/datasets/blob/f51b6994db27ea69261ef919fb7775928f9ec10b/src/datasets/search.py#L261, so whenever the value of `device` is not None in https://github.com/huggingface/datasets/blob/71f76e0bdeaddadedc4f9c8d15cfff5a36d62f66/src/datasets/search.py#L438, that exception is triggered.\r\n\r\nSo on, adding `import faiss` inside https://github.com/huggingface/datasets/blob/71f76e0bdeaddadedc4f9c8d15cfff5a36d62f66/src/datasets/search.py#L305 right after the check of `device`'s value, solves the issue and lets you calculate the indices in GPU.\r\n\r\nI'll add the code in a PR linked to this issue in case you want to merge it!", "Adding here the complete error traceback!\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/alvarobartt/lol.py\", line 12, in <module>\r\n ds_with_embeddings.add_faiss_index(column='embeddings', device=0) # default `device=None`\r\n File \"/home/alvarobartt/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py\", line 3656, in add_faiss_index\r\n super().add_faiss_index(\r\n File \"/home/alvarobartt/.local/lib/python3.9/site-packages/datasets/search.py\", line 478, in add_faiss_index\r\n faiss_index.add_vectors(self, column=column, train_size=train_size, faiss_verbose=True)\r\n File \"/home/alvarobartt/.local/lib/python3.9/site-packages/datasets/search.py\", line 281, in add_vectors\r\n self.faiss_index = self._faiss_index_to_device(index, self.device)\r\n File \"/home/alvarobartt/.local/lib/python3.9/site-packages/datasets/search.py\", line 327, in _faiss_index_to_device\r\n faiss_res = faiss.StandardGpuResources()\r\nNameError: name 'faiss' is not defined\r\n```", "Closed as https://github.com/huggingface/datasets/pull/4288 already merged! :hugs:" ]
1,651,763,385,000
1,652,190,799,000
1,652,190,799,000
CONTRIBUTOR
null
null
null
## Describe the bug When using `datasets` to calculate the FAISS indices of a dataset, the exception `NameError: name 'faiss' is not defined` is triggered when trying to calculate those on a device (GPU), so `.add_faiss_index(..., device=0)` fails with that exception. All that assuming that `datasets` is properly installed and `faiss-gpu` too, as well as all the CUDA drivers required. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from transformers import DPRContextEncoder, DPRContextEncoderTokenizer import torch torch.set_grad_enabled(False) ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base") ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base") from datasets import load_dataset ds = load_dataset('crime_and_punish', split='train[:100]') ds_with_embeddings = ds.map(lambda example: {'embeddings': ctx_encoder(**ctx_tokenizer(example["line"], return_tensors="pt"))[0][0].numpy()}) ds_with_embeddings.add_faiss_index(column='embeddings', device=0) # default `device=None` ``` ## Expected results A new column named `embeddings` in the dataset that we're adding the index to. ## Actual results An exception is triggered with the following message `NameError: name 'faiss' is not defined`. ## Environment info - `datasets` version: 2.1.0 - Platform: Linux-5.13.0-1022-azure-x86_64-with-glibc2.31 - Python version: 3.9.12 - PyArrow version: 7.0.0 - Pandas version: 1.4.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4287/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4287/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/4286
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4286/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4286/comments
https://api.github.com/repos/huggingface/datasets/issues/4286/events
https://github.com/huggingface/datasets/pull/4286
1,226,758,621
PR_kwDODunzps43W-DI
4,286
Add Lahnda language tag
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,651,761,260,000
1,652,184,604,000
1,652,184,158,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4286", "html_url": "https://github.com/huggingface/datasets/pull/4286", "diff_url": "https://github.com/huggingface/datasets/pull/4286.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4286.patch", "merged_at": 1652184157000 }
This language is present in [Wikimedia's WIT](https://huggingface.co/datasets/wikimedia/wit_base) dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4286/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4286/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4285
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4285/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4285/comments
https://api.github.com/repos/huggingface/datasets/issues/4285/events
https://github.com/huggingface/datasets/pull/4285
1,226,374,831
PR_kwDODunzps43VtEa
4,285
Update LexGLUE README.md
{ "login": "iliaschalkidis", "id": 1626984, "node_id": "MDQ6VXNlcjE2MjY5ODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iliaschalkidis", "html_url": "https://github.com/iliaschalkidis", "followers_url": "https://api.github.com/users/iliaschalkidis/followers", "following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}", "gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}", "starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions", "organizations_url": "https://api.github.com/users/iliaschalkidis/orgs", "repos_url": "https://api.github.com/users/iliaschalkidis/repos", "events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}", "received_events_url": "https://api.github.com/users/iliaschalkidis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,651,739,810,000
1,651,757,944,000
1,651,757,615,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4285", "html_url": "https://github.com/huggingface/datasets/pull/4285", "diff_url": "https://github.com/huggingface/datasets/pull/4285.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4285.patch", "merged_at": 1651757615000 }
Update the leaderboard based on the latest results presented in the ACL 2022 version of the article.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4285/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4284
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4284/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4284/comments
https://api.github.com/repos/huggingface/datasets/issues/4284/events
https://github.com/huggingface/datasets/issues/4284
1,226,200,727
I_kwDODunzps5JFlaX
4,284
Issues in processing very large datasets
{ "login": "sajastu", "id": 10419055, "node_id": "MDQ6VXNlcjEwNDE5MDU1", "avatar_url": "https://avatars.githubusercontent.com/u/10419055?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sajastu", "html_url": "https://github.com/sajastu", "followers_url": "https://api.github.com/users/sajastu/followers", "following_url": "https://api.github.com/users/sajastu/following{/other_user}", "gists_url": "https://api.github.com/users/sajastu/gists{/gist_id}", "starred_url": "https://api.github.com/users/sajastu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sajastu/subscriptions", "organizations_url": "https://api.github.com/users/sajastu/orgs", "repos_url": "https://api.github.com/users/sajastu/repos", "events_url": "https://api.github.com/users/sajastu/events{/privacy}", "received_events_url": "https://api.github.com/users/sajastu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi ! `datasets` doesn't load the dataset in memory. Instead it uses memory mapping to load your dataset from your disk (it is stored as arrow files). Do you know at what point you have RAM issues exactly ?\r\n\r\nHow big are your graph_data_train dictionaries btw ?" ]
1,651,726,869,000
1,652,184,923,000
null
NONE
null
null
null
## Describe the bug I'm trying to add a feature called "subgraph" to CNN/DM dataset (modifications on run_summarization.py of Huggingface Transformers script) --- I'm not quite sure if I'm doing it the right way, though--- but the main problem appears when the training starts where the error ` [OSError: [Errno 12] Cannot allocate memory]` appears. I suppose this problem roots in RAM issues and how the dataset is loaded during training, but I have no clue of what I can do to fix it. Observing the dataset's cache directory, I see that it takes ~600GB of memory and that's why I believe special care is needed when loading it into the memory. Here are my modifications to `run_summarization.py` code. ``` # loading pre-computed dictionary where keys are 'id' of article and values are corresponding subgraph graph_data_train = get_graph_data('train') graph_data_validation = get_graph_data('val') ... ... with training_args.main_process_first(desc="train dataset map pre-processing"): train_dataset = train_dataset.map( preprocess_function_train, batched=True, num_proc=data_args.preprocessing_num_workers, remove_columns=column_names, load_from_cache_file=not data_args.overwrite_cache, desc="Running tokenizer on train dataset", ) ``` And here is the modified preprocessed function: ``` def preprocess_function_train(examples): inputs, targets, sub_graphs, ids = [], [], [], [] for i in range(len(examples[text_column])): if examples[text_column][i] is not None and examples[summary_column][i] is not None: # if examples['doc_id'][i] in graph_data.keys(): inputs.append(examples[text_column][i]) targets.append(examples[summary_column][i]) sub_graphs.append(graph_data_train[examples['id'][i]]) ids.append(examples['id'][i]) inputs = [prefix + inp for inp in inputs] model_inputs = tokenizer(inputs, max_length=data_args.max_source_length, padding=padding, truncation=True, sub_graphs=sub_graphs, ids=ids) # Setup the tokenizer for targets with tokenizer.as_target_tokenizer(): labels = tokenizer(targets, max_length=max_target_length, padding=padding, truncation=True) # If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore # padding in the loss. if padding == "max_length" and data_args.ignore_pad_token_for_loss: labels["input_ids"] = [ [(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels["input_ids"] ] model_inputs["labels"] = labels["input_ids"] return model_inputs ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.1.0 - Platform: Linux Ubuntu - Python version: 3.6 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4284/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4284/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/4283
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4283/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4283/comments
https://api.github.com/repos/huggingface/datasets/issues/4283/events
https://github.com/huggingface/datasets/pull/4283
1,225,686,988
PR_kwDODunzps43Tnxo
4,283
Fix filesystem docstring
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,651,686,162,000
1,651,854,722,000
1,651,818,137,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4283", "html_url": "https://github.com/huggingface/datasets/pull/4283", "diff_url": "https://github.com/huggingface/datasets/pull/4283.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4283.patch", "merged_at": 1651818137000 }
This PR untangles the `S3FileSystem` docstring so the [parameters](https://huggingface.co/docs/datasets/master/en/package_reference/main_classes#parameters) are properly displayed.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4283/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4282
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4282/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4282/comments
https://api.github.com/repos/huggingface/datasets/issues/4282/events
https://github.com/huggingface/datasets/pull/4282
1,225,616,545
PR_kwDODunzps43TZYL
4,282
Don't do unnecessary list type casting to avoid replacing None values by empty lists
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Quick question about the message in the warning. You say \"will be fixed in a future major version\" but don't you mean \"will raise an error in a future major version\"?", "Right ! Good catch, thanks, I updated the message to say \"will raise an error in a future major version\"" ]
1,651,682,221,000
1,651,833,838,000
1,651,833,420,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4282", "html_url": "https://github.com/huggingface/datasets/pull/4282", "diff_url": "https://github.com/huggingface/datasets/pull/4282.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4282.patch", "merged_at": 1651833420000 }
In certain cases, `None` values are replaced by empty lists when casting feature types. It happens every time you cast an array of nested lists like [None, [0, 1, 2, 3]] to a different type (to change the integer precision for example). In this case you'd get [[], [0, 1, 2, 3]] for example. This issue comes from PyArrow, see the discussion in https://github.com/huggingface/datasets/issues/3676 This issue also happens when no type casting is needed, because casting is supposed to be a no-op in this case. But as https://github.com/huggingface/datasets/issues/3676 shown, it's not the case and `None` are replaced by empty lists even if we cast to the exact same type. In this PR I just workaround this bug in the case where no type casting is needed. In particular, I only call `pa.ListArray.from_arrays` only when necessary. I also added a warning when some `None` are effectively replaced by empty lists. I wanted to raise an error in this case, but maybe we should wait a major update to do so This PR fixes this particular case, that is occurring in `run_qa.py` in `transformers`: ```python from datasets import Dataset ds = Dataset.from_dict({"a": range(4)}) ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"]) print(ds.to_pandas()) # before: # b # 0 [None, [0]] # 1 [[], [0]] # 2 [[], [0]] # 3 [[], [0]] # # now: # b # 0 [None, [0]] # 1 [None, [0]] # 2 [None, [0]] # 3 [None, [0]] ``` cc @sgugger
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4282/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4282/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4281
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4281/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4281/comments
https://api.github.com/repos/huggingface/datasets/issues/4281/events
https://github.com/huggingface/datasets/pull/4281
1,225,556,939
PR_kwDODunzps43TNBm
4,281
Remove a copy-paste sentence in dataset cards
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "The non-passing tests have nothing to do with this PR." ]
1,651,678,915,000
1,651,826,283,000
1,651,689,196,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4281", "html_url": "https://github.com/huggingface/datasets/pull/4281", "diff_url": "https://github.com/huggingface/datasets/pull/4281.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4281.patch", "merged_at": 1651689196000 }
Remove the following copy-paste sentence from dataset cards: ``` We show detailed information for up to 5 configurations of the dataset. ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4281/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4281/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4280
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4280/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4280/comments
https://api.github.com/repos/huggingface/datasets/issues/4280/events
https://github.com/huggingface/datasets/pull/4280
1,225,446,844
PR_kwDODunzps43S2xg
4,280
Add missing features to commonsense_qa dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "@albertvillanova it adds question_concept and id which is great. I suppose we'll talk about staying true to the format on another PR. ", "Yes, let's merge this PR as it is: it adds missing features.\r\n\r\nA subsequent PR may address the request on changing the dataset feature structure." ]
1,651,674,266,000
1,651,847,037,000
1,651,846,606,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4280", "html_url": "https://github.com/huggingface/datasets/pull/4280", "diff_url": "https://github.com/huggingface/datasets/pull/4280.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4280.patch", "merged_at": 1651846606000 }
Partially fix #4275.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4280/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4280/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4279
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4279/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4279/comments
https://api.github.com/repos/huggingface/datasets/issues/4279/events
https://github.com/huggingface/datasets/pull/4279
1,225,300,273
PR_kwDODunzps43SXw5
4,279
Update minimal PyArrow version warning
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,651,667,169,000
1,651,740,658,000
1,651,740,227,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4279", "html_url": "https://github.com/huggingface/datasets/pull/4279", "diff_url": "https://github.com/huggingface/datasets/pull/4279.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4279.patch", "merged_at": 1651740227000 }
Update the minimal PyArrow version warning (should've been part of #4250).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4279/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4278
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4278/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4278/comments
https://api.github.com/repos/huggingface/datasets/issues/4278/events
https://github.com/huggingface/datasets/pull/4278
1,225,122,123
PR_kwDODunzps43RyTs
4,278
Add missing features to openbookqa dataset for additional config
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Let's merge this PR as it is: it adds missing features.\r\n\r\nA subsequent PR may address the request on changing the data feature structure." ]
1,651,656,170,000
1,651,842,800,000
1,651,842,361,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4278", "html_url": "https://github.com/huggingface/datasets/pull/4278", "diff_url": "https://github.com/huggingface/datasets/pull/4278.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4278.patch", "merged_at": 1651842361000 }
Partially fix #4276.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4278/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4278/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4277
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4277/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4277/comments
https://api.github.com/repos/huggingface/datasets/issues/4277/events
https://github.com/huggingface/datasets/pull/4277
1,225,002,286
PR_kwDODunzps43RZV9
4,277
Enable label alignment for token classification datasets
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hmm, not sure why the Windows tests are failing with:\r\n\r\n```\r\nDid not find path entry C:\\tools\\miniconda3\\bin\r\nC:\\tools\\miniconda3\\envs\\py37\\python.exe: No module named pytest\r\n```\r\n\r\nEdit: running the CI again fixed the problem 🙃 ", "> One last nit and we can merge then\r\n\r\nThanks, done!" ]
1,651,648,516,000
1,651,851,735,000
1,651,851,391,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4277", "html_url": "https://github.com/huggingface/datasets/pull/4277", "diff_url": "https://github.com/huggingface/datasets/pull/4277.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4277.patch", "merged_at": 1651851391000 }
This PR extends the `Dataset.align_labels_with_mapping()` method to support alignment of label mappings between datasets and models for token classification (e.g. NER). Example of usage: ```python from datasets import load_dataset ner_ds = load_dataset("conll2003", split="train") # returns [3, 0, 7, 0, 0, 0, 7, 0, 0] ner_ds[0]["ner_tags"] # hypothetical model mapping with O <--> B-LOC label2id = { "B-LOC": "0", "B-MISC": "7", "B-ORG": "3", "B-PER": "1", "I-LOC": "6", "I-MISC": "8", "I-ORG": "4", "I-PER": "2", "O": "5" } ner_aligned_ds = ner_ds.align_labels_with_mapping(label2id, "ner_tags") # returns [3, 5, 7, 5, 5, 5, 7, 5, 5] ner_aligned_ds[0]["ner_tags"] ``` Context: we need this in AutoTrain to automatically align datasets / models during evaluation. cc @abhishekkrthakur
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4277/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4277/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4276
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4276/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4276/comments
https://api.github.com/repos/huggingface/datasets/issues/4276/events
https://github.com/huggingface/datasets/issues/4276
1,224,949,252
I_kwDODunzps5JAz4E
4,276
OpenBookQA has missing and inconsistent field names
{ "login": "vblagoje", "id": 458335, "node_id": "MDQ6VXNlcjQ1ODMzNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vblagoje", "html_url": "https://github.com/vblagoje", "followers_url": "https://api.github.com/users/vblagoje/followers", "following_url": "https://api.github.com/users/vblagoje/following{/other_user}", "gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}", "starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions", "organizations_url": "https://api.github.com/users/vblagoje/orgs", "repos_url": "https://api.github.com/users/vblagoje/repos", "events_url": "https://api.github.com/users/vblagoje/events{/privacy}", "received_events_url": "https://api.github.com/users/vblagoje/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @vblagoje.\r\n\r\nIndeed, I noticed some of these issues while reviewing this PR:\r\n- #4259 \r\n\r\nThis is in my TODO list. ", "Ok, awesome @albertvillanova How about #4275 ?", "On the other hand, I am not sure if we should always preserve the original nested structure. I think we should also consider other factors as convenience or consistency.\r\n\r\nFor example, other datasets also flatten \"question.stem\" into \"question\":\r\n- ai2_arc:\r\n ```python\r\n question = data[\"question\"][\"stem\"]\r\n choices = data[\"question\"][\"choices\"]\r\n text_choices = [choice[\"text\"] for choice in choices]\r\n label_choices = [choice[\"label\"] for choice in choices]\r\n yield id_, {\r\n \"id\": id_,\r\n \"answerKey\": answerkey,\r\n \"question\": question,\r\n \"choices\": {\"text\": text_choices, \"label\": label_choices},\r\n }\r\n ```\r\n- commonsense_qa:\r\n ```python\r\n question = data[\"question\"]\r\n stem = question[\"stem\"]\r\n yield id_, {\r\n \"answerKey\": answerkey,\r\n \"question\": stem,\r\n \"choices\": {\"label\": labels, \"text\": texts},\r\n }\r\n ```\r\n- cos_e:\r\n ```python\r\n \"question\": cqa[\"question\"][\"stem\"],\r\n ```\r\n- qasc\r\n- quartz\r\n- wiqa\r\n\r\nExceptions:\r\n- exams\r\n\r\nI think we should agree on a CONVENIENT format for QA and use always CONSISTENTLY the same.", "@albertvillanova I agree that we should be consistent. In the last month, I have come across tons of code that deals with OpenBookQA and CommonSenseQA and all of that code relies on the original data format structure. We can't expect users to adopt HF Datasets if we arbitrarily change the structure of the format just because we think something makes more sense. I am in that position now (downloading original data rather than using HF Datasets) and undoubtedly it hinders HF Datasets' widespread use and adoption. Missing fields like in the case of #4275 is definitely bad and not even up for a discussion IMHO! cc @lhoestq ", "I'm opening a PR that adds the missing fields.\r\n\r\nLet's agree on the feature structure: @lhoestq @mariosasko @polinaeterna ", "IMO we should always try to preserve the original structure unless there is a good reason not to (and I don't see one in this case).", "I agree with @mariosasko . The transition to the original format could be done in one PR for the next minor release, clearly documenting all dataset changes just as @albertvillanova outlined them above and perhaps even providing a per dataset util method to convert the new valid format to the old for backward compatibility. Users who relied on the old format will update their code with either the util method for a quick fix or slightly more elaborate for the new. ", "I don't have a strong opinion on this, besides the fact that whatever decision we agree on, should be applied to all datasets.\r\n\r\nThere is always the tension between:\r\n- preserving each dataset original structure (which has the advantage of not forcing users to learn other structure for the same dataset),\r\n- and on the other hand performing some king of standardization/harmonization depending on the task (this has the advantage that once learnt, the same structure applies to all datasets; this has been done for e.g. POS tagging: all datasets have been adapted to a certain \"standard\" structure).\r\n - Another advantage: datasets can easily be interchanged (or joined) to be used by the same model\r\n\r\nRecently, in the BigScience BioMedical hackathon, they adopted a different approach:\r\n- they implement a \"source\" config, respecting the original structure as much as possible\r\n- they implement additional config for each task, with a \"standard\" nested structure per task, which is most useful for users.", "@albertvillanova, thanks for the detailed answer and the new perspectives. I understand the friction for the best design approach much better now. Ultimately, it is essential to include all the missing fields and the correct data first. Whatever approach is determined to be optimal is important but not as crucial once all the data is there, and users can create lambda functions to create whatever structure serves them best. " ]
1,651,643,512,000
1,652,013,223,000
null
CONTRIBUTOR
null
null
null
## Describe the bug OpenBookQA implementation is inconsistent with the original dataset. We need to: 1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format. 2. Add missing additional fields: - 'fact1': row['fact1'], - 'humanScore': row['humanScore'], - 'clarity': row['clarity'], - 'turkIdAnonymized': row['turkIdAnonymized'] 3. Ensure the structure and every data item in the original OpenBookQA matches our OpenBookQA version. ## Expected results The structure and every data item in the original OpenBookQA matches our OpenBookQA version. ## Actual results TBD ## Environment info - `datasets` version: 2.1.0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.13 - PyArrow version: 7.0.0 - Pandas version: 1.4.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4276/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4276/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/4275
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4275/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4275/comments
https://api.github.com/repos/huggingface/datasets/issues/4275/events
https://github.com/huggingface/datasets/issues/4275
1,224,943,414
I_kwDODunzps5JAyc2
4,275
CommonSenseQA has missing and inconsistent field names
{ "login": "vblagoje", "id": 458335, "node_id": "MDQ6VXNlcjQ1ODMzNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vblagoje", "html_url": "https://github.com/vblagoje", "followers_url": "https://api.github.com/users/vblagoje/followers", "following_url": "https://api.github.com/users/vblagoje/following{/other_user}", "gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}", "starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions", "organizations_url": "https://api.github.com/users/vblagoje/orgs", "repos_url": "https://api.github.com/users/vblagoje/repos", "events_url": "https://api.github.com/users/vblagoje/events{/privacy}", "received_events_url": "https://api.github.com/users/vblagoje/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @vblagoje.\r\n\r\nI'm opening a PR to address this. " ]
1,651,642,739,000
1,651,664,478,000
null
CONTRIBUTOR
null
null
null
## Describe the bug In short, CommonSenseQA implementation is inconsistent with the original dataset. More precisely, we need to: 1. Add the dataset matching "id" field. The current dataset, instead, regenerates monotonically increasing id. 2. The [“question”][“stem”] field is flattened into "question". We should match the original dataset and unflatten it 3. Add the missing "question_concept" field in the question tree node 4. Anything else? Go over the data structure of the newly repaired CommonSenseQA and make sure it matches the original ## Expected results Every data item of the CommonSenseQA should structurally and data-wise match the original CommonSenseQA dataset. ## Actual results TBD ## Environment info - `datasets` version: 2.1.0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.13 - PyArrow version: 7.0.0 - Pandas version: 1.4.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4275/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4275/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/4274
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4274/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4274/comments
https://api.github.com/repos/huggingface/datasets/issues/4274/events
https://github.com/huggingface/datasets/pull/4274
1,224,740,303
PR_kwDODunzps43Qm2w
4,274
Add API code examples for IterableDataset
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,651,617,857,000
1,651,681,772,000
1,651,681,324,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4274", "html_url": "https://github.com/huggingface/datasets/pull/4274", "diff_url": "https://github.com/huggingface/datasets/pull/4274.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4274.patch", "merged_at": 1651681324000 }
This PR adds API code examples for `IterableDataset` and `IterableDatasetDicts`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4274/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4274/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4273
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4273/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4273/comments
https://api.github.com/repos/huggingface/datasets/issues/4273/events
https://github.com/huggingface/datasets/pull/4273
1,224,681,036
PR_kwDODunzps43QaA6
4,273
leadboard info added for TNE
{ "login": "yanaiela", "id": 8031035, "node_id": "MDQ6VXNlcjgwMzEwMzU=", "avatar_url": "https://avatars.githubusercontent.com/u/8031035?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanaiela", "html_url": "https://github.com/yanaiela", "followers_url": "https://api.github.com/users/yanaiela/followers", "following_url": "https://api.github.com/users/yanaiela/following{/other_user}", "gists_url": "https://api.github.com/users/yanaiela/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanaiela/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanaiela/subscriptions", "organizations_url": "https://api.github.com/users/yanaiela/orgs", "repos_url": "https://api.github.com/users/yanaiela/repos", "events_url": "https://api.github.com/users/yanaiela/events{/privacy}", "received_events_url": "https://api.github.com/users/yanaiela/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,651,613,741,000
1,651,757,124,000
1,651,756,693,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4273", "html_url": "https://github.com/huggingface/datasets/pull/4273", "diff_url": "https://github.com/huggingface/datasets/pull/4273.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4273.patch", "merged_at": 1651756693000 }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4273/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4273/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4272
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4272/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4272/comments
https://api.github.com/repos/huggingface/datasets/issues/4272/events
https://github.com/huggingface/datasets/pull/4272
1,224,635,660
PR_kwDODunzps43QQQt
4,272
Fix typo in logging docs
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "> This PR fixes #4271.\r\n\r\nThings have not changed when searching \"tqdm\" in the Dataset document. The second result still performs as \"Enable\".", "Hi @jiangwy99, the fix will appear on the `main` version of the docs:\r\n\r\n![Screen Shot 2022-05-04 at 8 38 29 AM](https://user-images.githubusercontent.com/59462357/166718225-6848ab91-87d1-4572-9912-40a909af6cb9.png)\r\n", "Fixed now, thanks." ]
1,651,610,877,000
1,651,678,947,000
1,651,647,516,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4272", "html_url": "https://github.com/huggingface/datasets/pull/4272", "diff_url": "https://github.com/huggingface/datasets/pull/4272.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4272.patch", "merged_at": 1651647515000 }
This PR fixes #4271.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4272/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4272/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4271
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4271/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4271/comments
https://api.github.com/repos/huggingface/datasets/issues/4271/events
https://github.com/huggingface/datasets/issues/4271
1,224,404,403
I_kwDODunzps5I-u2z
4,271
A typo in docs of datasets.disable_progress_bar
{ "login": "jiangwy99", "id": 39762734, "node_id": "MDQ6VXNlcjM5NzYyNzM0", "avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jiangwy99", "html_url": "https://github.com/jiangwy99", "followers_url": "https://api.github.com/users/jiangwy99/followers", "following_url": "https://api.github.com/users/jiangwy99/following{/other_user}", "gists_url": "https://api.github.com/users/jiangwy99/gists{/gist_id}", "starred_url": "https://api.github.com/users/jiangwy99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiangwy99/subscriptions", "organizations_url": "https://api.github.com/users/jiangwy99/orgs", "repos_url": "https://api.github.com/users/jiangwy99/repos", "events_url": "https://api.github.com/users/jiangwy99/events{/privacy}", "received_events_url": "https://api.github.com/users/jiangwy99/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[ { "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi! Thanks for catching and reporting the typo, a PR has been opened to fix it :)" ]
1,651,599,896,000
1,651,647,515,000
1,651,647,515,000
NONE
null
null
null
## Describe the bug in the docs of V2.1.0 datasets.disable_progress_bar, we should replace "enable" with "disable".
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4271/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4271/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/4270
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4270/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4270/comments
https://api.github.com/repos/huggingface/datasets/issues/4270/events
https://github.com/huggingface/datasets/pull/4270
1,224,244,460
PR_kwDODunzps43PC5V
4,270
Fix style in openbookqa dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,651,591,294,000
1,651,826,286,000
1,651,594,852,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4270", "html_url": "https://github.com/huggingface/datasets/pull/4270", "diff_url": "https://github.com/huggingface/datasets/pull/4270.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4270.patch", "merged_at": 1651594852000 }
CI in PR: - #4259 was green, but after merging it to master, a code quality error appeared.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4270/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4269
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4269/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4269/comments
https://api.github.com/repos/huggingface/datasets/issues/4269/events
https://github.com/huggingface/datasets/pull/4269
1,223,865,145
PR_kwDODunzps43Nzwh
4,269
Add license and point of contact to big_patent dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,651,569,847,000
1,651,826,289,000
1,651,576,579,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4269", "html_url": "https://github.com/huggingface/datasets/pull/4269", "diff_url": "https://github.com/huggingface/datasets/pull/4269.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4269.patch", "merged_at": 1651576579000 }
Update metadata of big_patent dataset with: - license - point of contact
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4269/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4269/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4268
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4268/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4268/comments
https://api.github.com/repos/huggingface/datasets/issues/4268/events
https://github.com/huggingface/datasets/issues/4268
1,223,331,964
I_kwDODunzps5I6pB8
4,268
error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
{ "login": "i-am-neo", "id": 102043285, "node_id": "U_kgDOBhUOlQ", "avatar_url": "https://avatars.githubusercontent.com/u/102043285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/i-am-neo", "html_url": "https://github.com/i-am-neo", "followers_url": "https://api.github.com/users/i-am-neo/followers", "following_url": "https://api.github.com/users/i-am-neo/following{/other_user}", "gists_url": "https://api.github.com/users/i-am-neo/gists{/gist_id}", "starred_url": "https://api.github.com/users/i-am-neo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/i-am-neo/subscriptions", "organizations_url": "https://api.github.com/users/i-am-neo/orgs", "repos_url": "https://api.github.com/users/i-am-neo/repos", "events_url": "https://api.github.com/users/i-am-neo/events{/privacy}", "received_events_url": "https://api.github.com/users/i-am-neo/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "It would help a lot to be able to preview the dataset - I'd like to see if the pronunciations are in the dataset, eg. for [\"word\"](https://en.wiktionary.org/wiki/word),\r\n\r\nPronunciation\r\n([Received Pronunciation](https://en.wikipedia.org/wiki/Received_Pronunciation)) [IPA](https://en.wiktionary.org/wiki/Wiktionary:International_Phonetic_Alphabet)([key](https://en.wiktionary.org/wiki/Appendix:English_pronunciation)): /wɜːd/\r\n([General American](https://en.wikipedia.org/wiki/General_American)) [enPR](https://en.wiktionary.org/wiki/Appendix:English_pronunciation): wûrd, [IPA](https://en.wiktionary.org/wiki/Wiktionary:International_Phonetic_Alphabet)([key](https://en.wiktionary.org/wiki/Appendix:English_pronunciation)): /wɝd/", "Hi @i-am-neo, thanks for reporting.\r\n\r\nNormally this dataset should be private and not accessible for public use. @cakiki, @lvwerra, any reason why is it public? I see many other Wikimedia datasets are also public.\r\n\r\nAlso note that last commit \"Add metadata\" (https://huggingface.co/datasets/bigscience-catalogue-lm-data/lm_en_wiktionary_filtered/commit/dc2f458dab50e00f35c94efb3cd4009996858609) introduced buggy data files (`data/file-01.jsonl.gz.lock`, `data/file-01.jsonl.gz.lock.lock`). The same bug appears in other datasets as well.\r\n\r\n@i-am-neo, please note that in the near future we are planning to make public all datasets used for the BigScience project (at least all of them whose license allows to do that). Once public, they will be accessible for all the NLP community.", "Ah this must be a bug introduced at creation time since the repos were created programmatically; I'll go ahead and make them private; sorry about that!", "All datasets are private now. \r\n\r\nRe:that bug I think we're currently avoiding it by avoiding verifications. (i.e. `ignore_verifications=True`)", "Thanks a lot, @cakiki.\r\n\r\n@i-am-neo, I'm closing this issue for now because the dataset is not publicly available yet. Just stay tuned, as we will soon release all the BigScience open-license datasets. ", "Thanks for letting me know, @albertvillanova @cakiki.\r\nAny chance of having a subset alpha version in the meantime? \r\nI only need two dicts out of wiktionary: 1) phoneme(as key): word, and 2) word(as key): its phonemes.\r\n\r\nWould like to use it for a mini-poc [Robust ASR](https://github.com/huggingface/transformers/issues/13162#issuecomment-1096881290) decoding, cc @patrickvonplaten. \r\n\r\n(Patrick, possible to email you so as not to litter github with comments? I have some observations after experiments training hubert on some YT AMI-like data (11.44% wer). Also wonder if a robust ASR is on your/HG's roadmap). Thanks!", "Hey @i-am-neo,\r\n\r\nCool to hear that you're working on Robust ASR! Feel free to drop me a mail :-)", "@i-am-neo This particular subset of the dataset was taken from the [CirrusSearch dumps](https://dumps.wikimedia.org/other/cirrussearch/current/)\r\nYou're specifically after the [enwiktionary-20220425-cirrussearch-content.json.gz](https://dumps.wikimedia.org/other/cirrussearch/current/enwiktionary-20220425-cirrussearch-content.json.gz) file", "thanks @cakiki ! <del>I could access the gz file yesterday (but neglected to tuck it away somewhere safe), and today the link is throwing a 404. Can you help? </del> Never mind, got it!", "thanks @patrickvonplaten. will do - getting my observations together." ]
1,651,523,665,000
1,651,852,410,000
1,651,577,028,000
NONE
null
null
null
## Describe the bug Error generated when attempting to download dataset ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered") ``` ## Expected results A clear and concise description of the expected results. ## Actual results ``` ExpectedMoreDownloadedFiles Traceback (most recent call last) [<ipython-input-62-4ac5cf959477>](https://localhost:8080/#) in <module>() 1 from datasets import load_dataset 2 ----> 3 dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered") 3 frames [/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py](https://localhost:8080/#) in verify_checksums(expected_checksums, recorded_checksums, verification_name) 31 return 32 if len(set(expected_checksums) - set(recorded_checksums)) > 0: ---> 33 raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums))) 34 if len(set(recorded_checksums) - set(expected_checksums)) > 0: 35 raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums))) ExpectedMoreDownloadedFiles: {'/home/leandro/catalogue_data/datasets/lm_en_wiktionary_filtered/data/file-01.jsonl.gz', '/home/leandro/catalogue_data/datasets/lm_en_wiktionary_filtered/data/file-01.jsonl.gz.lock'} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4268/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4268/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/4267
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4267/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4267/comments
https://api.github.com/repos/huggingface/datasets/issues/4267/events
https://github.com/huggingface/datasets/pull/4267
1,223,214,275
PR_kwDODunzps43LzOR
4,267
Replace data URL in SAMSum dataset within the same repository
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,651,516,688,000
1,651,826,293,000
1,651,518,229,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4267", "html_url": "https://github.com/huggingface/datasets/pull/4267", "diff_url": "https://github.com/huggingface/datasets/pull/4267.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4267.patch", "merged_at": 1651518229000 }
Replace data URL with one in the same repository.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4267/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4266
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4266/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4266/comments
https://api.github.com/repos/huggingface/datasets/issues/4266/events
https://github.com/huggingface/datasets/pull/4266
1,223,116,436
PR_kwDODunzps43LeXK
4,266
Add HF Speech Bench to Librispeech Dataset Card
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,651,510,771,000
1,651,740,440,000
1,651,740,009,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4266", "html_url": "https://github.com/huggingface/datasets/pull/4266", "diff_url": "https://github.com/huggingface/datasets/pull/4266.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4266.patch", "merged_at": 1651740009000 }
Adds the HF Speech Bench to Librispeech Dataset Card in place of the Papers With Code Leaderboard. Should improve usage and visibility of this leaderboard! Wondering whether this can also be done for [Common Voice 7](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) and [8](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) through someone with permissions? cc @patrickvonplaten: more leaderboard promotion!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4266/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4266/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4263
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4263/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4263/comments
https://api.github.com/repos/huggingface/datasets/issues/4263/events
https://github.com/huggingface/datasets/pull/4263
1,222,723,083
PR_kwDODunzps43KLnD
4,263
Rename imagenet2012 -> imagenet-1k
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "> Later we can add imagenet-21k as a new dataset if we want.\r\n\r\nisn't it what models refer to as `imagenet` already?", "> isn't it what models refer to as imagenet already?\r\n\r\nI wasn't sure, but it looks like it indeed. Therefore having a dataset `imagenet` for ImageNet 21k makes sense actually.\r\n\r\nEDIT: actually not all `imagenet` tag refer to ImageNet 21k - we will need to correct some of them", "_The documentation is not available anymore as the PR was closed or merged._", "should we remove the repo mirror on the hub side or will you do it?" ]
1,651,487,181,000
1,651,513,846,000
1,651,509,177,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4263", "html_url": "https://github.com/huggingface/datasets/pull/4263", "diff_url": "https://github.com/huggingface/datasets/pull/4263.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4263.patch", "merged_at": 1651509177000 }
On the Hugging Face Hub, users refer to imagenet2012 (from #4178 ) as imagenet-1k in their model tags. To correctly link models to imagenet, we should rename this dataset `imagenet-1k`. Later we can add `imagenet-21k` as a new dataset if we want. Once this one is merged we can delete the `imagenet2012` dataset repository on the Hub. EDIT: to complete the rationale on why we should name it `imagenet-1k`: If users specifically added the tag `imagenet-1k` , then it could be for two reasons (not sure which one is predominant), either they - wanted to make it explicit that it’s not 21k -> the distinction is important for the community - or they have been following this convention from other models -> the convention implicitly exists already
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4263/reactions", "total_count": 4, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4263/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4262
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4262/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4262/comments
https://api.github.com/repos/huggingface/datasets/issues/4262/events
https://github.com/huggingface/datasets/pull/4262
1,222,130,749
PR_kwDODunzps43IOye
4,262
Add YAML tags to Dataset Card rotten tomatoes
{ "login": "mo6zes", "id": 10004251, "node_id": "MDQ6VXNlcjEwMDA0MjUx", "avatar_url": "https://avatars.githubusercontent.com/u/10004251?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mo6zes", "html_url": "https://github.com/mo6zes", "followers_url": "https://api.github.com/users/mo6zes/followers", "following_url": "https://api.github.com/users/mo6zes/following{/other_user}", "gists_url": "https://api.github.com/users/mo6zes/gists{/gist_id}", "starred_url": "https://api.github.com/users/mo6zes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mo6zes/subscriptions", "organizations_url": "https://api.github.com/users/mo6zes/orgs", "repos_url": "https://api.github.com/users/mo6zes/repos", "events_url": "https://api.github.com/users/mo6zes/events{/privacy}", "received_events_url": "https://api.github.com/users/mo6zes/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,651,406,348,000
1,651,588,053,000
1,651,587,635,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4262", "html_url": "https://github.com/huggingface/datasets/pull/4262", "diff_url": "https://github.com/huggingface/datasets/pull/4262.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4262.patch", "merged_at": 1651587635000 }
The dataset card for the rotten tomatoes / MR movie review dataset had some missing YAML tags. Hopefully, this also improves the visibility of this dataset now that paperswithcode and huggingface link to eachother.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4262/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4262/timeline
null
true
