url (stringlengths 61-61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 75-75) | comments_url (stringlengths 70-70) | events_url (stringlengths 68-68) | html_url (stringlengths 49-51) | id (int64 1.16B-1.34B) | node_id (stringlengths 18-19) | number (int64 3.81k-4.82k) | title (stringlengths 1-162) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool 1 class) | assignee (null) | assignees (sequence) | milestone (null) | comments (sequence) | created_at (int64 1,646B-1,660B) | updated_at (int64 1,646B-1,660B) | closed_at (int64 1,646B-1,660B ⌀) | author_association (stringclasses 3 values) | active_lock_reason (null) | body (stringlengths 9-19.5k ⌀) | reactions (dict) | timeline_url (stringlengths 70-70) | performed_via_github_app (null) | state_reason (stringclasses 3 values) | draft (bool 2 classes) | pull_request (dict) | is_pull_request (bool 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4820 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4820/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4820/comments | https://api.github.com/repos/huggingface/datasets/issues/4820/events | https://github.com/huggingface/datasets/issues/4820 | 1,335,117,132 | I_kwDODunzps5PlEVM | 4,820 | Terminating: fork() called from a process already using GNU OpenMP, this is unsafe. | {
"login": "talhaanwarch",
"id": 37379131,
"node_id": "MDQ6VXNlcjM3Mzc5MTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/37379131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/talhaanwarch",
"html_url": "https://github.com/talhaanwarch",
"followers_url": "https://api.github.com/users/talhaanwarch/followers",
"following_url": "https://api.github.com/users/talhaanwarch/following{/other_user}",
"gists_url": "https://api.github.com/users/talhaanwarch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/talhaanwarch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/talhaanwarch/subscriptions",
"organizations_url": "https://api.github.com/users/talhaanwarch/orgs",
"repos_url": "https://api.github.com/users/talhaanwarch/repos",
"events_url": "https://api.github.com/users/talhaanwarch/events{/privacy}",
"received_events_url": "https://api.github.com/users/talhaanwarch/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Fixed by installing either resampy<3 or resampy>=4"
] | 1,660,160,553,000 | 1,660,161,190,000 | 1,660,161,190,000 | NONE | null | Hi, when I try to run the prepare_dataset function in [fine-tuning ASR tutorial 4](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb), I get this error:
`Terminating: fork() called from a process already using GNU OpenMP, this is unsafe.`
There are no other logs available, so I have no clue what is causing it.
```python
def prepare_dataset(batch):
    audio = batch["path"]
    # batched output is "un-batched"
    batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
    batch["input_length"] = len(batch["input_values"])
    with processor.as_target_processor():
        batch["labels"] = processor(batch["text"]).input_ids
    return batch

data = data.map(prepare_dataset, remove_columns=data.column_names["train"],
                num_proc=4)
```
## Actual results
There is no traceback except:
`Terminating: fork() called from a process already using GNU OpenMP, this is unsafe.`
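The comment thread notes this was later fixed by pinning `resampy` (`resampy<3` or `resampy>=4`). As a fork-avoiding fallback, here is a minimal sketch (my own assumption based only on the error message, not something confirmed in this issue): keep GNU OpenMP single-threaded before anything initializes it, or skip `fork()` entirely by mapping in a single process.

```python
import os

# Assumption: setting this before heavy imports keeps GNU OpenMP single-threaded,
# so the fork() performed by `map(..., num_proc=4)` no longer trips the runtime.
os.environ["OMP_NUM_THREADS"] = "1"

# Alternative: avoid fork() altogether (slower, but safe):
# data = data.map(prepare_dataset, remove_columns=data.column_names["train"], num_proc=1)
```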
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.15.0-43-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4820/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4819 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4819/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4819/comments | https://api.github.com/repos/huggingface/datasets/issues/4819/events | https://github.com/huggingface/datasets/pull/4819 | 1,335,064,449 | PR_kwDODunzps48-xc6 | 4,819 | Add missing language tags to resources | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660,158,402,000 | 1,660,160,749,000 | 1,660,159,935,000 | MEMBER | null | Add missing language tags to resources, required by existing datasets on GitHub. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4819/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4819",
"html_url": "https://github.com/huggingface/datasets/pull/4819",
"diff_url": "https://github.com/huggingface/datasets/pull/4819.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4819.patch",
"merged_at": 1660159935000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4818 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4818/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4818/comments | https://api.github.com/repos/huggingface/datasets/issues/4818/events | https://github.com/huggingface/datasets/pull/4818 | 1,334,941,810 | PR_kwDODunzps48-W7a | 4,818 | Add cc-by-sa-2.5 license tag | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4818). All of your documentation changes will be reflected on that endpoint."
] | 1,660,151,919,000 | 1,660,154,101,000 | null | CONTRIBUTOR | null | - [ ] add it to moon-landing
- [ ] add it to hub-docs | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4818/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4818/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4818",
"html_url": "https://github.com/huggingface/datasets/pull/4818",
"diff_url": "https://github.com/huggingface/datasets/pull/4818.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4818.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4817 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4817/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4817/comments | https://api.github.com/repos/huggingface/datasets/issues/4817/events | https://github.com/huggingface/datasets/issues/4817 | 1,334,572,163 | I_kwDODunzps5Pi_SD | 4,817 | Outdated Link for mkqa Dataset | {
"login": "liaeh",
"id": 52380283,
"node_id": "MDQ6VXNlcjUyMzgwMjgz",
"avatar_url": "https://avatars.githubusercontent.com/u/52380283?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liaeh",
"html_url": "https://github.com/liaeh",
"followers_url": "https://api.github.com/users/liaeh/followers",
"following_url": "https://api.github.com/users/liaeh/following{/other_user}",
"gists_url": "https://api.github.com/users/liaeh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liaeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liaeh/subscriptions",
"organizations_url": "https://api.github.com/users/liaeh/orgs",
"repos_url": "https://api.github.com/users/liaeh/repos",
"events_url": "https://api.github.com/users/liaeh/events{/privacy}",
"received_events_url": "https://api.github.com/users/liaeh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,660,135,545,000 | 1,660,135,545,000 | null | NONE | null | ## Describe the bug
The URL used to download the mkqa dataset is outdated. It seems the correct URL is now https://github.com/apple/ml-mkqa/blob/main/dataset/mkqa.jsonl.gz instead of https://github.com/apple/ml-mkqa/raw/master/dataset/mkqa.jsonl.gz (the master branch has been renamed to main).
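Until the loading script is updated, a possible workaround (my own sketch, not part of this report) is to bypass the mkqa script and point the generic JSON builder at the renamed branch:

```python
from datasets import load_dataset

# Assumption: the file on the renamed `main` branch is gzipped JSON Lines, so the
# generic "json" builder can load it without the outdated mkqa loading script.
url = "https://github.com/apple/ml-mkqa/raw/main/dataset/mkqa.jsonl.gz"
dataset = load_dataset("json", data_files={"train": url})
```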
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("mkqa")
```
## Expected results
The dataset downloads successfully.
## Actual results
```python
Downloading builder script:
4.79k/? [00:00<00:00, 201kB/s]
Downloading metadata:
13.2k/? [00:00<00:00, 504kB/s]
Downloading and preparing dataset mkqa/mkqa (download: 11.35 MiB, generated: 34.29 MiB, post-processed: Unknown size, total: 45.65 MiB) to /home/lhr/.cache/huggingface/datasets/mkqa/mkqa/1.0.0/5401489c674c81257cf563417aaaa5de2c7e26a1090ce9b10eb0404f10003d4d...
Downloading data files: 0%
0/1 [00:00<?, ?it/s]
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Input In [3], in <cell line: 3>()
1 from datasets import load_dataset
----> 3 dataset = load_dataset("mkqa")
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/load.py:1746, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1743 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1745 # Download and prepare data
-> 1746 builder_instance.download_and_prepare(
1747 download_config=download_config,
1748 download_mode=download_mode,
1749 ignore_verifications=ignore_verifications,
1750 try_from_hf_gcs=try_from_hf_gcs,
1751 use_auth_token=use_auth_token,
1752 )
1754 # Build dataset for splits
1755 keep_in_memory = (
1756 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1757 )
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/builder.py:704, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
702 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
703 if not downloaded_from_gcs:
--> 704 self._download_and_prepare(
705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
706 )
707 # Sync info
708 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/builder.py:1227, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verify_infos)
1226 def _download_and_prepare(self, dl_manager, verify_infos):
-> 1227 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/builder.py:771, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
769 split_dict = SplitDict(dataset_name=self.name)
770 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 771 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
773 # Checksums verification
774 if verify_infos and dl_manager.record_checksums:
File ~/.cache/huggingface/modules/datasets_modules/datasets/mkqa/5401489c674c81257cf563417aaaa5de2c7e26a1090ce9b10eb0404f10003d4d/mkqa.py:130, in Mkqa._split_generators(self, dl_manager)
128 # download and extract URLs
129 urls_to_download = _URLS
--> 130 downloaded_files = dl_manager.download_and_extract(urls_to_download)
132 return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]})]
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/download/download_manager.py:431, in DownloadManager.download_and_extract(self, url_or_urls)
415 def download_and_extract(self, url_or_urls):
416 """Download and extract given url_or_urls.
417
418 Is roughly equivalent to:
(...)
429 extracted_path(s): `str`, extracted paths of given URL(s).
430 """
--> 431 return self.extract(self.download(url_or_urls))
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/download/download_manager.py:309, in DownloadManager.download(self, url_or_urls)
306 download_func = partial(self._download, download_config=download_config)
308 start_time = datetime.now()
--> 309 downloaded_path_or_paths = map_nested(
310 download_func,
311 url_or_urls,
312 map_tuple=True,
313 num_proc=download_config.num_proc,
314 disable_tqdm=not is_progress_bar_enabled(),
315 desc="Downloading data files",
316 )
317 duration = datetime.now() - start_time
318 logger.info(f"Downloading took {duration.total_seconds() // 60} min")
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py:393, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc)
391 num_proc = 1
392 if num_proc <= 1 or len(iterable) <= num_proc:
--> 393 mapped = [
394 _single_map_nested((function, obj, types, None, True, None))
395 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
396 ]
397 else:
398 split_kwds = [] # We organize the splits ourselve (contiguous splits)
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py:394, in <listcomp>(.0)
391 num_proc = 1
392 if num_proc <= 1 or len(iterable) <= num_proc:
393 mapped = [
--> 394 _single_map_nested((function, obj, types, None, True, None))
395 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
396 ]
397 else:
398 split_kwds = [] # We organize the splits ourselve (contiguous splits)
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py:330, in _single_map_nested(args)
328 # Singleton first to spare some computation
329 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 330 return function(data_struct)
332 # Reduce logging to keep things readable in multiprocessing with tqdm
333 if rank is not None and logging.get_verbosity() < logging.WARNING:
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/download/download_manager.py:335, in DownloadManager._download(self, url_or_filename, download_config)
332 if is_relative_path(url_or_filename):
333 # append the relative path to the base_path
334 url_or_filename = url_or_path_join(self._base_path, url_or_filename)
--> 335 return cached_path(url_or_filename, download_config=download_config)
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py:185, in cached_path(url_or_filename, download_config, **download_kwargs)
181 url_or_filename = str(url_or_filename)
183 if is_remote_url(url_or_filename):
184 # URL, so get it from the cache (downloading if necessary)
--> 185 output_path = get_from_cache(
186 url_or_filename,
187 cache_dir=cache_dir,
188 force_download=download_config.force_download,
189 proxies=download_config.proxies,
190 resume_download=download_config.resume_download,
191 user_agent=download_config.user_agent,
192 local_files_only=download_config.local_files_only,
193 use_etag=download_config.use_etag,
194 max_retries=download_config.max_retries,
195 use_auth_token=download_config.use_auth_token,
196 ignore_url_params=download_config.ignore_url_params,
197 download_desc=download_config.download_desc,
198 )
199 elif os.path.exists(url_or_filename):
200 # File, and it exists.
201 output_path = url_or_filename
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py:530, in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc)
525 raise FileNotFoundError(
526 f"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been"
527 " disabled. To enable file online look-ups, set 'local_files_only' to False."
528 )
529 elif response is not None and response.status_code == 404:
--> 530 raise FileNotFoundError(f"Couldn't find file at {url}")
531 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
532 if head_error is not None:
FileNotFoundError: Couldn't find file at https://github.com/apple/ml-mkqa/raw/master/dataset/mkqa.jsonl.gz
```
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 9.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4817/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4816 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4816/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4816/comments | https://api.github.com/repos/huggingface/datasets/issues/4816/events | https://github.com/huggingface/datasets/pull/4816 | 1,334,099,454 | PR_kwDODunzps487kpq | 4,816 | Update version of opus_paracrawl dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4816). All of your documentation changes will be reflected on that endpoint."
] | 1,660,109,984,000 | 1,660,110,803,000 | null | MEMBER | null | This PR updates OPUS ParaCrawl from version 7.1 to version 9.
Fix #4815. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4816/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4816",
"html_url": "https://github.com/huggingface/datasets/pull/4816",
"diff_url": "https://github.com/huggingface/datasets/pull/4816.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4816.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4815 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4815/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4815/comments | https://api.github.com/repos/huggingface/datasets/issues/4815/events | https://github.com/huggingface/datasets/issues/4815 | 1,334,078,303 | I_kwDODunzps5PhGtf | 4,815 | Outdated loading script for OPUS ParaCrawl dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,660,108,354,000 | 1,660,108,354,000 | null | MEMBER | null | ## Describe the bug
Our loading script for OPUS ParaCrawl loads version 7.1, while the latest available version is 9.
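For context, an illustration only (not the actual script): OPUS loading scripts typically hard-code the release in a version constant and a download URL, so updating to version 9 is essentially a one-line bump.

```python
# Illustrative names only; the real opus_paracrawl script defines its own constants.
_VERSION = "9.0.0"  # previously "7.1.0"
_BASE_URL = "https://object.pouta.csc.fi/OPUS-ParaCrawl/v9/moses/{}-{}.txt.zip"
```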
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4815/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4815/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4814 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4814/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4814/comments | https://api.github.com/repos/huggingface/datasets/issues/4814/events | https://github.com/huggingface/datasets/issues/4814 | 1,333,356,230 | I_kwDODunzps5PeWbG | 4,814 | Support CSV as metadata file format in AudioFolder/ImageFolder | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,660,055,809,000 | 1,660,055,809,000 | null | CONTRIBUTOR | null | Requested here: https://discuss.huggingface.co/t/how-to-structure-an-image-dataset-repo-using-the-image-folder-approach/21004. CSV is also used in AutoTrain for specifying metadata in image datasets. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4814/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4814/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4813 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4813/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4813/comments | https://api.github.com/repos/huggingface/datasets/issues/4813/events | https://github.com/huggingface/datasets/pull/4813 | 1,333,287,756 | PR_kwDODunzps48446r | 4,813 | Fix loading example in opus dataset cards | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660,052,858,000 | 1,660,067,535,000 | 1,660,066,698,000 | MEMBER | null | This PR:
- fixes the loading examples (with the corrected dataset names; see the sketch below) in the dataset cards for:
  - opus_dgt
  - opus_paracrawl
  - opus_wikipedia
- completes their dataset cards with the missing required information: title, data instances/fields/splits
- enumerates the supported languages
- adds a missing citation reference for opus_wikipedia
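A corrected loading call of the kind these cards now document (the config name below is hypothetical; each card enumerates the supported language pairs):

```python
from datasets import load_dataset

# Hypothetical language-pair config; check the dataset card for the exact names.
dataset = load_dataset("opus_paracrawl", "en-ga")
```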
Related to:
- #4806 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4813/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4813",
"html_url": "https://github.com/huggingface/datasets/pull/4813",
"diff_url": "https://github.com/huggingface/datasets/pull/4813.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4813.patch",
"merged_at": 1660066698000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4812 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4812/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4812/comments | https://api.github.com/repos/huggingface/datasets/issues/4812/events | https://github.com/huggingface/datasets/pull/4812 | 1,333,051,730 | PR_kwDODunzps484Fzq | 4,812 | Fix bug in function validate_type for Python >= 3.9 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4812). All of your documentation changes will be reflected on that endpoint."
] | 1,660,041,162,000 | 1,660,041,622,000 | null | MEMBER | null | Fix the `validate_type` function so that it uses `get_origin` instead. This makes the function forward-compatible.
This fixes #4811 because:
```python
In [4]: typing.Optional[str]
Out[4]: typing.Optional[str]
In [5]: get_origin(typing.Optional[str])
Out[5]: typing.Union
```
Fix #4811. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4812/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4812",
"html_url": "https://github.com/huggingface/datasets/pull/4812",
"diff_url": "https://github.com/huggingface/datasets/pull/4812.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4812.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4811 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4811/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4811/comments | https://api.github.com/repos/huggingface/datasets/issues/4811/events | https://github.com/huggingface/datasets/issues/4811 | 1,333,043,421 | I_kwDODunzps5PdKDd | 4,811 | Bug in function validate_type for Python >= 3.9 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,660,040,721,000 | 1,660,040,722,000 | null | MEMBER | null | ## Describe the bug
The function `validate_type` assumes that the type `typing.Optional[str]` is automatically transformed to `typing.Union[str, NoneType]`.
```python
In [4]: typing.Optional[str]
Out[4]: typing.Union[str, NoneType]
```
However, this is not the case for Python 3.9:
```python
In [3]: typing.Optional[str]
Out[3]: typing.Optional[str]
```
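A version-robust check therefore compares type origins rather than the raw type objects. Below is a minimal sketch in the spirit of the fix in #4812 (the real `validate_type` signature is not shown here, so the helper name is illustrative):

```python
from typing import Optional, Union, get_args, get_origin

def is_optional(tp) -> bool:
    # get_origin() yields typing.Union for Optional[...] on every Python version,
    # regardless of whether the interpreter displays it as Optional or Union.
    return get_origin(tp) is Union and type(None) in get_args(tp)

assert is_optional(Optional[str])
assert not is_optional(str)
```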
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4811/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4820 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4820/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4820/comments | https://api.github.com/repos/huggingface/datasets/issues/4820/events | https://github.com/huggingface/datasets/issues/4820 | 1,335,117,132 | I_kwDODunzps5PlEVM | 4,820 | Terminating: fork() called from a process already using GNU OpenMP, this is unsafe. | {
"login": "talhaanwarch",
"id": 37379131,
"node_id": "MDQ6VXNlcjM3Mzc5MTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/37379131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/talhaanwarch",
"html_url": "https://github.com/talhaanwarch",
"followers_url": "https://api.github.com/users/talhaanwarch/followers",
"following_url": "https://api.github.com/users/talhaanwarch/following{/other_user}",
"gists_url": "https://api.github.com/users/talhaanwarch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/talhaanwarch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/talhaanwarch/subscriptions",
"organizations_url": "https://api.github.com/users/talhaanwarch/orgs",
"repos_url": "https://api.github.com/users/talhaanwarch/repos",
"events_url": "https://api.github.com/users/talhaanwarch/events{/privacy}",
"received_events_url": "https://api.github.com/users/talhaanwarch/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Fixed by installing either resampy<3 or resampy>=4"
] | 1,660,160,553,000 | 1,660,161,190,000 | 1,660,161,190,000 | NONE | null | Hi, when i try to run prepare_dataset function in [fine tuning ASR tutorial 4](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb) , i got this error.
I got this error
Terminating: fork() called from a process already using GNU OpenMP, this is unsafe.
There is no other logs available, so i have no clue what is the cause of it.
```
def prepare_dataset(batch):
audio = batch["path"]
# batched output is "un-batched"
batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
batch["input_length"] = len(batch["input_values"])
with processor.as_target_processor():
batch["labels"] = processor(batch["text"]).input_ids
return batch
data = data.map(prepare_dataset, remove_columns=data.column_names["train"],
num_proc=4)
```
Specify the actual results or traceback.
There is no traceback except
`Terminating: fork() called from a process already using GNU OpenMP, this is unsafe.`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-5.15.0-43-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4820/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4819 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4819/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4819/comments | https://api.github.com/repos/huggingface/datasets/issues/4819/events | https://github.com/huggingface/datasets/pull/4819 | 1,335,064,449 | PR_kwDODunzps48-xc6 | 4,819 | Add missing language tags to resources | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660,158,402,000 | 1,660,160,749,000 | 1,660,159,935,000 | MEMBER | null | Add missing language tags to resources, required by existing datasets on GitHub. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4819/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4819",
"html_url": "https://github.com/huggingface/datasets/pull/4819",
"diff_url": "https://github.com/huggingface/datasets/pull/4819.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4819.patch",
"merged_at": 1660159935000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4818 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4818/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4818/comments | https://api.github.com/repos/huggingface/datasets/issues/4818/events | https://github.com/huggingface/datasets/pull/4818 | 1,334,941,810 | PR_kwDODunzps48-W7a | 4,818 | Add add cc-by-sa-2.5 license tag | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4818). All of your documentation changes will be reflected on that endpoint."
] | 1,660,151,919,000 | 1,660,154,101,000 | null | CONTRIBUTOR | null | - [ ] add it to moon-landing
- [ ] add it to hub-docs | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4818/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4818/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4818",
"html_url": "https://github.com/huggingface/datasets/pull/4818",
"diff_url": "https://github.com/huggingface/datasets/pull/4818.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4818.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4817 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4817/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4817/comments | https://api.github.com/repos/huggingface/datasets/issues/4817/events | https://github.com/huggingface/datasets/issues/4817 | 1,334,572,163 | I_kwDODunzps5Pi_SD | 4,817 | Outdated Link for mkqa Dataset | {
"login": "liaeh",
"id": 52380283,
"node_id": "MDQ6VXNlcjUyMzgwMjgz",
"avatar_url": "https://avatars.githubusercontent.com/u/52380283?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liaeh",
"html_url": "https://github.com/liaeh",
"followers_url": "https://api.github.com/users/liaeh/followers",
"following_url": "https://api.github.com/users/liaeh/following{/other_user}",
"gists_url": "https://api.github.com/users/liaeh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liaeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liaeh/subscriptions",
"organizations_url": "https://api.github.com/users/liaeh/orgs",
"repos_url": "https://api.github.com/users/liaeh/repos",
"events_url": "https://api.github.com/users/liaeh/events{/privacy}",
"received_events_url": "https://api.github.com/users/liaeh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,660,135,545,000 | 1,660,135,545,000 | null | NONE | null | ## Describe the bug
The URL used to download the mkqa dataset is outdated. It seems the URL to download the dataset is currently https://github.com/apple/ml-mkqa/blob/main/dataset/mkqa.jsonl.gz instead of https://github.com/apple/ml-mkqa/raw/master/dataset/mkqa.jsonl.gz (master branch has been renamed to main).
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("mkqa")
```
## Expected results
downloads the dataset
## Actual results
```python
Downloading builder script:
4.79k/? [00:00<00:00, 201kB/s]
Downloading metadata:
13.2k/? [00:00<00:00, 504kB/s]
Downloading and preparing dataset mkqa/mkqa (download: 11.35 MiB, generated: 34.29 MiB, post-processed: Unknown size, total: 45.65 MiB) to /home/lhr/.cache/huggingface/datasets/mkqa/mkqa/1.0.0/5401489c674c81257cf563417aaaa5de2c7e26a1090ce9b10eb0404f10003d4d...
Downloading data files: 0%
0/1 [00:00<?, ?it/s]
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Input In [3], in <cell line: 3>()
1 from datasets import load_dataset
----> 3 dataset = load_dataset("mkqa")
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/load.py:1746, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1743 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1745 # Download and prepare data
-> 1746 builder_instance.download_and_prepare(
1747 download_config=download_config,
1748 download_mode=download_mode,
1749 ignore_verifications=ignore_verifications,
1750 try_from_hf_gcs=try_from_hf_gcs,
1751 use_auth_token=use_auth_token,
1752 )
1754 # Build dataset for splits
1755 keep_in_memory = (
1756 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1757 )
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/builder.py:704, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
702 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
703 if not downloaded_from_gcs:
--> 704 self._download_and_prepare(
705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
706 )
707 # Sync info
708 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/builder.py:1227, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verify_infos)
1226 def _download_and_prepare(self, dl_manager, verify_infos):
-> 1227 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/builder.py:771, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
769 split_dict = SplitDict(dataset_name=self.name)
770 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 771 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
773 # Checksums verification
774 if verify_infos and dl_manager.record_checksums:
File ~/.cache/huggingface/modules/datasets_modules/datasets/mkqa/5401489c674c81257cf563417aaaa5de2c7e26a1090ce9b10eb0404f10003d4d/mkqa.py:130, in Mkqa._split_generators(self, dl_manager)
128 # download and extract URLs
129 urls_to_download = _URLS
--> 130 downloaded_files = dl_manager.download_and_extract(urls_to_download)
132 return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]})]
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/download/download_manager.py:431, in DownloadManager.download_and_extract(self, url_or_urls)
415 def download_and_extract(self, url_or_urls):
416 """Download and extract given url_or_urls.
417
418 Is roughly equivalent to:
(...)
429 extracted_path(s): `str`, extracted paths of given URL(s).
430 """
--> 431 return self.extract(self.download(url_or_urls))
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/download/download_manager.py:309, in DownloadManager.download(self, url_or_urls)
306 download_func = partial(self._download, download_config=download_config)
308 start_time = datetime.now()
--> 309 downloaded_path_or_paths = map_nested(
310 download_func,
311 url_or_urls,
312 map_tuple=True,
313 num_proc=download_config.num_proc,
314 disable_tqdm=not is_progress_bar_enabled(),
315 desc="Downloading data files",
316 )
317 duration = datetime.now() - start_time
318 logger.info(f"Downloading took {duration.total_seconds() // 60} min")
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py:393, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc)
391 num_proc = 1
392 if num_proc <= 1 or len(iterable) <= num_proc:
--> 393 mapped = [
394 _single_map_nested((function, obj, types, None, True, None))
395 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
396 ]
397 else:
398 split_kwds = [] # We organize the splits ourselve (contiguous splits)
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py:394, in <listcomp>(.0)
391 num_proc = 1
392 if num_proc <= 1 or len(iterable) <= num_proc:
393 mapped = [
--> 394 _single_map_nested((function, obj, types, None, True, None))
395 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
396 ]
397 else:
398 split_kwds = [] # We organize the splits ourselve (contiguous splits)
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py:330, in _single_map_nested(args)
328 # Singleton first to spare some computation
329 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 330 return function(data_struct)
332 # Reduce logging to keep things readable in multiprocessing with tqdm
333 if rank is not None and logging.get_verbosity() < logging.WARNING:
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/download/download_manager.py:335, in DownloadManager._download(self, url_or_filename, download_config)
332 if is_relative_path(url_or_filename):
333 # append the relative path to the base_path
334 url_or_filename = url_or_path_join(self._base_path, url_or_filename)
--> 335 return cached_path(url_or_filename, download_config=download_config)
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py:185, in cached_path(url_or_filename, download_config, **download_kwargs)
181 url_or_filename = str(url_or_filename)
183 if is_remote_url(url_or_filename):
184 # URL, so get it from the cache (downloading if necessary)
--> 185 output_path = get_from_cache(
186 url_or_filename,
187 cache_dir=cache_dir,
188 force_download=download_config.force_download,
189 proxies=download_config.proxies,
190 resume_download=download_config.resume_download,
191 user_agent=download_config.user_agent,
192 local_files_only=download_config.local_files_only,
193 use_etag=download_config.use_etag,
194 max_retries=download_config.max_retries,
195 use_auth_token=download_config.use_auth_token,
196 ignore_url_params=download_config.ignore_url_params,
197 download_desc=download_config.download_desc,
198 )
199 elif os.path.exists(url_or_filename):
200 # File, and it exists.
201 output_path = url_or_filename
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py:530, in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc)
525 raise FileNotFoundError(
526 f"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been"
527 " disabled. To enable file online look-ups, set 'local_files_only' to False."
528 )
529 elif response is not None and response.status_code == 404:
--> 530 raise FileNotFoundError(f"Couldn't find file at {url}")
531 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
532 if head_error is not None:
FileNotFoundError: Couldn't find file at https://github.com/apple/ml-mkqa/raw/master/dataset/mkqa.jsonl.gz
```
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 9.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4817/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4816 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4816/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4816/comments | https://api.github.com/repos/huggingface/datasets/issues/4816/events | https://github.com/huggingface/datasets/pull/4816 | 1,334,099,454 | PR_kwDODunzps487kpq | 4,816 | Update version of opus_paracrawl dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4816). All of your documentation changes will be reflected on that endpoint."
] | 1,660,109,984,000 | 1,660,110,803,000 | null | MEMBER | null | This PR updates OPUS ParaCrawl from version 7.1 to version 9.
Fix #4815. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4816/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4816",
"html_url": "https://github.com/huggingface/datasets/pull/4816",
"diff_url": "https://github.com/huggingface/datasets/pull/4816.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4816.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4815 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4815/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4815/comments | https://api.github.com/repos/huggingface/datasets/issues/4815/events | https://github.com/huggingface/datasets/issues/4815 | 1,334,078,303 | I_kwDODunzps5PhGtf | 4,815 | Outdated loading script for OPUS ParaCrawl dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,660,108,354,000 | 1,660,108,354,000 | null | MEMBER | null | ## Describe the bug
Our loading script for OPUS ParaCrawl loads version 7.1, whereas the current version is 9.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4815/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4815/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4814 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4814/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4814/comments | https://api.github.com/repos/huggingface/datasets/issues/4814/events | https://github.com/huggingface/datasets/issues/4814 | 1,333,356,230 | I_kwDODunzps5PeWbG | 4,814 | Support CSV as metadata file format in AudioFolder/ImageFolder | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,660,055,809,000 | 1,660,055,809,000 | null | CONTRIBUTOR | null | Requested here: https://discuss.huggingface.co/t/how-to-structure-an-image-dataset-repo-using-the-image-folder-approach/21004. CSV is also used in AutoTrain for specifying metadata in image datasets. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4814/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4814/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4813 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4813/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4813/comments | https://api.github.com/repos/huggingface/datasets/issues/4813/events | https://github.com/huggingface/datasets/pull/4813 | 1,333,287,756 | PR_kwDODunzps48446r | 4,813 | Fix loading example in opus dataset cards | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,660,052,858,000 | 1,660,067,535,000 | 1,660,066,698,000 | MEMBER | null | This PR:
- fixes the examples showing how to load the datasets (with the corrected dataset names) in the dataset cards for:
  - opus_dgt
  - opus_paracrawl
  - opus_wikipedia
- adds the missing required information (title, data instances/fields/splits) to their dataset cards
- enumerates the supported languages
- adds a missing citation reference for opus_wikipedia
Related to:
- #4806 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4813/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4813",
"html_url": "https://github.com/huggingface/datasets/pull/4813",
"diff_url": "https://github.com/huggingface/datasets/pull/4813.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4813.patch",
"merged_at": 1660066698000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4812 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4812/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4812/comments | https://api.github.com/repos/huggingface/datasets/issues/4812/events | https://github.com/huggingface/datasets/pull/4812 | 1,333,051,730 | PR_kwDODunzps484Fzq | 4,812 | Fix bug in function validate_type for Python >= 3.9 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4812). All of your documentation changes will be reflected on that endpoint."
] | 1,660,041,162,000 | 1,660,041,622,000 | null | MEMBER | null | Fix the `validate_type` function so that it uses `get_origin` to normalize the type before checking it. This makes the function forward compatible with Python >= 3.9.
This fixes #4811 because:
```python
In [4]: typing.Optional[str]
Out[4]: typing.Optional[str]
In [5]: get_origin(typing.Optional[str])
Out[5]: typing.Union
```
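A minimal sketch of the idea (the real `validate_type` in `datasets` has a different signature; this only illustrates the `get_origin` normalization):
```python
from typing import Optional, Union, get_origin

def is_optional_type(tp) -> bool:
    # get_origin collapses both typing.Optional[str] (the Python >= 3.9 repr)
    # and typing.Union[str, NoneType] (the older repr) to typing.Union
    return get_origin(tp) is Union and type(None) in tp.__args__

assert is_optional_type(Optional[str])
assert not is_optional_type(Union[int, str])
```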
Fix #4811. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4812/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4812",
"html_url": "https://github.com/huggingface/datasets/pull/4812",
"diff_url": "https://github.com/huggingface/datasets/pull/4812.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4812.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4811 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4811/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4811/comments | https://api.github.com/repos/huggingface/datasets/issues/4811/events | https://github.com/huggingface/datasets/issues/4811 | 1,333,043,421 | I_kwDODunzps5PdKDd | 4,811 | Bug in function validate_type for Python >= 3.9 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,660,040,721,000 | 1,660,040,722,000 | null | MEMBER | null | ## Describe the bug
The function `validate_type` assumes that the type `typing.Optional[str]` is automatically transformed to `typing.Union[str, NoneType]`.
```python
In [4]: typing.Optional[str]
Out[4]: typing.Union[str, NoneType]
```
However, this is not the case for Python >= 3.9:
```python
In [3]: typing.Optional[str]
Out[3]: typing.Optional[str]
```
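A quick check of the discrepancy: the repr changes across Python versions, while `typing.get_origin` stays stable (assuming `validate_type` currently relies on the transformed `Union` form):
```python
import typing

tp = typing.Optional[str]
print(repr(tp))                               # "typing.Optional[str]" on Python >= 3.9
print(typing.get_origin(tp) is typing.Union)  # True on any Python >= 3.8
```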
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4811/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4810 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4810/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4810/comments | https://api.github.com/repos/huggingface/datasets/issues/4810/events | https://github.com/huggingface/datasets/pull/4810 | 1,333,038,702 | PR_kwDODunzps484C9l | 4,810 | hellaswag: add non-empty description to fix metadata issue | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4810). All of your documentation changes will be reflected on that endpoint."
] | 1,660,040,474,000 | 1,660,043,477,000 | null | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4810/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4810",
"html_url": "https://github.com/huggingface/datasets/pull/4810",
"diff_url": "https://github.com/huggingface/datasets/pull/4810.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4810.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4809 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4809/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4809/comments | https://api.github.com/repos/huggingface/datasets/issues/4809/events | https://github.com/huggingface/datasets/pull/4809 | 1,332,842,747 | PR_kwDODunzps483Y4h | 4,809 | Complete the mlqa dataset card | {
"login": "eldhoittangeorge",
"id": 7940237,
"node_id": "MDQ6VXNlcjc5NDAyMzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7940237?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eldhoittangeorge",
"html_url": "https://github.com/eldhoittangeorge",
"followers_url": "https://api.github.com/users/eldhoittangeorge/followers",
"following_url": "https://api.github.com/users/eldhoittangeorge/following{/other_user}",
"gists_url": "https://api.github.com/users/eldhoittangeorge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eldhoittangeorge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eldhoittangeorge/subscriptions",
"organizations_url": "https://api.github.com/users/eldhoittangeorge/orgs",
"repos_url": "https://api.github.com/users/eldhoittangeorge/repos",
"events_url": "https://api.github.com/users/eldhoittangeorge/events{/privacy}",
"received_events_url": "https://api.github.com/users/eldhoittangeorge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks for your contribution, @eldhoittangeorge.\r\n> \r\n> The CI error message: https://github.com/huggingface/datasets/runs/7743526624?check_suite_focus=true\r\n> \r\n> ```\r\n> E ValueError: The following issues have been found in the dataset cards:\r\n> E YAML tags:\r\n> E __init__() missing 5 required positional arguments: 'annotations_creators', 'language_creators', 'license', 'size_categories', and 'source_datasets'\r\n> ```\r\n\r\nI will fix the CI error.",
"@eldhoittangeorge, thanks again for all the fixes. Just a minor one before we can merge this PR: https://github.com/huggingface/datasets/runs/7744885754?check_suite_focus=true\r\n```\r\nE YAML tags:\r\nE Could not validate the metadata, found the following errors:\r\nE * field 'language_creators':\r\nE \t['unknown'] are not registered tags for 'language_creators', reference at https://github.com/huggingface/datasets/tree/main/src/datasets/utils/resources/creators.json\r\n```",
"> \r\n\r\nThanks, I updated the file. \r\nA small suggestion can you mention this link https://github.com/huggingface/datasets/tree/main/src/datasets/utils/resources/ in the contribution page. So that others will know the acceptable values for the tags."
] | 1,660,030,686,000 | 1,660,062,381,000 | 1,660,051,603,000 | CONTRIBUTOR | null | I fixed the issue #4808
Details of PR:
- Added languages included in the dataset.
- Added task id and task category.
- Updated the citation information.
Fix #4808. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4809/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4809",
"html_url": "https://github.com/huggingface/datasets/pull/4809",
"diff_url": "https://github.com/huggingface/datasets/pull/4809.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4809.patch",
"merged_at": 1660051603000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4808 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4808/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4808/comments | https://api.github.com/repos/huggingface/datasets/issues/4808/events | https://github.com/huggingface/datasets/issues/4808 | 1,332,840,217 | I_kwDODunzps5PcYcZ | 4,808 | Add more information to the dataset card of mlqa dataset | {
"login": "eldhoittangeorge",
"id": 7940237,
"node_id": "MDQ6VXNlcjc5NDAyMzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7940237?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eldhoittangeorge",
"html_url": "https://github.com/eldhoittangeorge",
"followers_url": "https://api.github.com/users/eldhoittangeorge/followers",
"following_url": "https://api.github.com/users/eldhoittangeorge/following{/other_user}",
"gists_url": "https://api.github.com/users/eldhoittangeorge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eldhoittangeorge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eldhoittangeorge/subscriptions",
"organizations_url": "https://api.github.com/users/eldhoittangeorge/orgs",
"repos_url": "https://api.github.com/users/eldhoittangeorge/repos",
"events_url": "https://api.github.com/users/eldhoittangeorge/events{/privacy}",
"received_events_url": "https://api.github.com/users/eldhoittangeorge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [
{
"login": "eldhoittangeorge",
"id": 7940237,
"node_id": "MDQ6VXNlcjc5NDAyMzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7940237?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eldhoittangeorge",
"html_url": "https://github.com/eldhoittangeorge",
"followers_url": "https://api.github.com/users/eldhoittangeorge/followers",
"following_url": "https://api.github.com/users/eldhoittangeorge/following{/other_user}",
"gists_url": "https://api.github.com/users/eldhoittangeorge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eldhoittangeorge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eldhoittangeorge/subscriptions",
"organizations_url": "https://api.github.com/users/eldhoittangeorge/orgs",
"repos_url": "https://api.github.com/users/eldhoittangeorge/repos",
"events_url": "https://api.github.com/users/eldhoittangeorge/events{/privacy}",
"received_events_url": "https://api.github.com/users/eldhoittangeorge/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"#self-assign",
"Fixed by:\r\n- #4809"
] | 1,660,030,542,000 | 1,660,052,003,000 | 1,660,052,003,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4808/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4807 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4807/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4807/comments | https://api.github.com/repos/huggingface/datasets/issues/4807/events | https://github.com/huggingface/datasets/pull/4807 | 1,332,784,110 | PR_kwDODunzps483MSH | 4,807 | document fix in opus_gnome dataset | {
"login": "gojiteji",
"id": 38291975,
"node_id": "MDQ6VXNlcjM4MjkxOTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/38291975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gojiteji",
"html_url": "https://github.com/gojiteji",
"followers_url": "https://api.github.com/users/gojiteji/followers",
"following_url": "https://api.github.com/users/gojiteji/following{/other_user}",
"gists_url": "https://api.github.com/users/gojiteji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gojiteji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gojiteji/subscriptions",
"organizations_url": "https://api.github.com/users/gojiteji/orgs",
"repos_url": "https://api.github.com/users/gojiteji/repos",
"events_url": "https://api.github.com/users/gojiteji/events{/privacy}",
"received_events_url": "https://api.github.com/users/gojiteji/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Duplicate:\r\n- #4806 "
] | 1,660,027,093,000 | 1,660,030,083,000 | 1,660,030,083,000 | CONTRIBUTOR | null | I fixed issue #4805.
I changed `"gnome"` to `"opus_gnome"` in [README.md](https://github.com/huggingface/datasets/tree/main/datasets/opus_gnome#dataset-summary). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4807/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4807",
"html_url": "https://github.com/huggingface/datasets/pull/4807",
"diff_url": "https://github.com/huggingface/datasets/pull/4807.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4807.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4806 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4806/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4806/comments | https://api.github.com/repos/huggingface/datasets/issues/4806/events | https://github.com/huggingface/datasets/pull/4806 | 1,332,664,038 | PR_kwDODunzps482yiS | 4,806 | Fix opus_gnome dataset card | {
"login": "gojiteji",
"id": 38291975,
"node_id": "MDQ6VXNlcjM4MjkxOTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/38291975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gojiteji",
"html_url": "https://github.com/gojiteji",
"followers_url": "https://api.github.com/users/gojiteji/followers",
"following_url": "https://api.github.com/users/gojiteji/following{/other_user}",
"gists_url": "https://api.github.com/users/gojiteji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gojiteji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gojiteji/subscriptions",
"organizations_url": "https://api.github.com/users/gojiteji/orgs",
"repos_url": "https://api.github.com/users/gojiteji/repos",
"events_url": "https://api.github.com/users/gojiteji/events{/privacy}",
"received_events_url": "https://api.github.com/users/gojiteji/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@gojiteji why have you closed this PR and created an identical one?\r\n- #4807 ",
"@albertvillanova \r\nI forgot to follow \"How to create a Pull\" in CONTRIBUTING.md in this branch.",
"Both are identical. And you can push additional commits to this branch.",
"I see. Thank you for your comment.",
"Anyway, @gojiteji thanks for your contribution and this fix.",
"Once you have modified the `opus_gnome` dataset card, our Continuous Integration test suite performs some tests on it that make some additional requirements: the errors that appear have nothing to do with your contribution, but with these additional quality requirements.",
"> the errors that appear have nothing to do with your contribution, but with these additional quality requirements.\r\n\r\nIs there anything I should do?",
"If you would like to address them as well in this PR, it would be awesome: https://github.com/huggingface/datasets/runs/7741104780?check_suite_focus=true\r\n",
"These are the 2 error messages:\r\n```\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE README Validation:\r\nE The following issues were found for the README at `/home/runner/work/datasets/datasets/datasets/opus_gnome/README.md`:\r\nE -\tNo first-level heading starting with `Dataset Card for` found in README. Skipping further validation for this README.\r\n\r\nE The following issues have been found in the dataset cards:\r\nE YAML tags:\r\nE Could not validate the metadata, found the following errors:\r\nE * field 'language':\r\nE \t['ara', 'cat', 'foo', 'gr', 'nqo', 'tmp'] are not registered tags for 'language', reference at https://github.com/huggingface/datasets/tree/main/src/datasets/utils/resources/languages.json\r\n```",
"In principle there are 2 errors:\r\n\r\nThe first one says, the title of the README does not start with `Dataset Card for`:\r\n- The README title is: `# Dataset Card Creation Guide`\r\n- According to the [template here](https://github.com/huggingface/datasets/blob/main/templates/README.md), it should be: `# Dataset Card for [Dataset Name]`",
"In relation with the languages:\r\n- you should check whether the language codes are properly spelled\r\n- and if so, adding them to our `languages.json` file, so that they are properly validated",
"Thank you for the detailed information. I'm checking it now.",
"```\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE README Validation:\r\nE The following issues were found for the README at `/home/runner/work/datasets/datasets/datasets/opus_gnome/README.md`:\r\nE -\tExpected some content in section `Data Instances` but it is empty.\r\nE -\tExpected some content in section `Data Fields` but it is empty.\r\nE -\tExpected some content in section `Data Splits` but it is empty.\r\n```",
"I added `ara`, `cat`, `gr`, and `nqo` to `languages.json` and removed `foo` and `tmp` from `README.md`.\r\nI also write Data Instances, Data Fields, and Data Splits in `README.md`.",
"Thanks for your investigation and fixes to the dataset card structure! I'm just making some suggestions before merging this PR: see below.",
"Should I create PR for `config.json` to add ` ara cat gr nqo` first?\r\nI think I can pass this failing after that.\r\n\r\nOr removing `ara, cat, gr, nqo, foo, tmp` from `README.md`. ",
"Once you address these issues, all the CI tests will pass.",
"Once the remaining changes are addressed (see unresolved above), we will be able to merge this:\r\n- [ ] Remove \"ara\" from README\r\n- [ ] Remove \"cat\" from README\r\n- [ ] Remove \"gr\" from README\r\n- [ ] Replace \"tmp\" with \"tyj\" in README\r\n- [ ] Add \"tyj\" to `languages.json`:\r\n ```\r\n \"tyj\": \"Tai Do; Tai Yo\",",
"I did the five changes."
] | 1,660,016,415,000 | 1,660,046,806,000 | 1,660,045,924,000 | CONTRIBUTOR | null | I fixed issue #4805.
I changed `"gnome"` to `"opus_gnome"` in [README.md](https://github.com/huggingface/datasets/tree/main/datasets/opus_gnome#dataset-summary).
Fix #4805. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4806/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4806/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4806",
"html_url": "https://github.com/huggingface/datasets/pull/4806",
"diff_url": "https://github.com/huggingface/datasets/pull/4806.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4806.patch",
"merged_at": 1660045924000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4805 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4805/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4805/comments | https://api.github.com/repos/huggingface/datasets/issues/4805/events | https://github.com/huggingface/datasets/issues/4805 | 1,332,653,531 | I_kwDODunzps5Pbq3b | 4,805 | Wrong example in opus_gnome dataset card | {
"login": "gojiteji",
"id": 38291975,
"node_id": "MDQ6VXNlcjM4MjkxOTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/38291975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gojiteji",
"html_url": "https://github.com/gojiteji",
"followers_url": "https://api.github.com/users/gojiteji/followers",
"following_url": "https://api.github.com/users/gojiteji/following{/other_user}",
"gists_url": "https://api.github.com/users/gojiteji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gojiteji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gojiteji/subscriptions",
"organizations_url": "https://api.github.com/users/gojiteji/orgs",
"repos_url": "https://api.github.com/users/gojiteji/repos",
"events_url": "https://api.github.com/users/gojiteji/events{/privacy}",
"received_events_url": "https://api.github.com/users/gojiteji/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,660,015,287,000 | 1,660,045,925,000 | 1,660,045,925,000 | CONTRIBUTOR | null | ## Describe the bug
I found that [the example in the opus_gnome dataset card](https://github.com/huggingface/datasets/tree/main/datasets/opus_gnome#dataset-summary) doesn't work.
## Steps to reproduce the bug
```python
load_dataset("gnome", lang1="it", lang2="pl")
```
`"gnome"` should be `"opus_gnome"`
## Expected results
```bash
100%
1/1 [00:00<00:00, 42.09it/s]
DatasetDict({
train: Dataset({
features: ['id', 'translation'],
num_rows: 8368
})
})
```
## Actual results
```bash
Couldn't find 'gnome' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/main/datasets/gnome/gnome.py
```
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.4.0-120-generic-x86_64-with-glibc2.27
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4805/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4805/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4804 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4804/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4804/comments | https://api.github.com/repos/huggingface/datasets/issues/4804/events | https://github.com/huggingface/datasets/issues/4804 | 1,332,630,358 | I_kwDODunzps5PblNW | 4,804 | streaming dataset with concatenating splits raises an error | {
"login": "Bing-su",
"id": 37621276,
"node_id": "MDQ6VXNlcjM3NjIxMjc2",
"avatar_url": "https://avatars.githubusercontent.com/u/37621276?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bing-su",
"html_url": "https://github.com/Bing-su",
"followers_url": "https://api.github.com/users/Bing-su/followers",
"following_url": "https://api.github.com/users/Bing-su/following{/other_user}",
"gists_url": "https://api.github.com/users/Bing-su/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bing-su/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bing-su/subscriptions",
"organizations_url": "https://api.github.com/users/Bing-su/orgs",
"repos_url": "https://api.github.com/users/Bing-su/repos",
"events_url": "https://api.github.com/users/Bing-su/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bing-su/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,660,012,916,000 | 1,660,012,916,000 | null | NONE | null | ## Describe the bug
streaming dataset with concatenating splits raises an error
## Steps to reproduce the bug
```python
from datasets import load_dataset
# no error
repo = "nateraw/ade20k-tiny"
dataset = load_dataset(repo, split="train+validation")
```
```python
from datasets import load_dataset
# error
repo = "nateraw/ade20k-tiny"
dataset = load_dataset(repo, split="train+validation", streaming=True)
```
```sh
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-4-a6ae02d63899>](https://localhost:8080/#) in <module>()
3 # error
4 repo = "nateraw/ade20k-tiny"
----> 5 dataset = load_dataset(repo, split="train+validation", streaming=True)
1 frames
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_streaming_dataset(self, split, base_path)
1030 splits_generator = splits_generators[split]
1031 else:
-> 1032 raise ValueError(f"Bad split: {split}. Available splits: {list(splits_generators)}")
1033
1034 # Create a dataset for each of the given splits
ValueError: Bad split: train+validation. Available splits: ['validation', 'train']
```
[Colab](https://colab.research.google.com/drive/1wMj08_0bym9jnGgByib4lsBPu8NCZBG9?usp=sharing)
## Expected results
Either load successfully, or raise an error saying that concatenated splits are not supported in streaming mode.
## Actual results
The `ValueError` shown above.
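In the meantime, a possible workaround is to chain the streamed splits by hand (a sketch; `IterableDataset` objects are plain iterables, so `itertools.chain` works):
```python
from itertools import chain
from datasets import load_dataset

repo = "nateraw/ade20k-tiny"
train = load_dataset(repo, split="train", streaming=True)
val = load_dataset(repo, split="validation", streaming=True)

for example in chain(train, val):  # iterates train first, then validation
    ...
```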
## Environment info
- `datasets` version: 2.4.0
- Platform: Windows-10-10.0.22000-SP0 (windows11 x64)
- Python version: 3.9.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4804/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4803 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4803/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4803/comments | https://api.github.com/repos/huggingface/datasets/issues/4803/events | https://github.com/huggingface/datasets/issues/4803 | 1,332,079,562 | I_kwDODunzps5PZevK | 4,803 | Support `pipeline` argument in inspect.py functions | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,659,974,484,000 | 1,659,974,484,000 | null | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
The `wikipedia` dataset requires a `pipeline` argument to build the list of splits:
https://huggingface.co/datasets/wikipedia/blob/main/wikipedia.py#L937
But this is currently not supported in `get_dataset_config_info`:
https://github.com/huggingface/datasets/blob/main/src/datasets/inspect.py#L373-L375
which is called by other functions, e.g. `get_dataset_split_names`.
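For illustration, the failing call today, plus one possible shape of the fix; the `pipeline` keyword on the public function is an assumption, mirroring what `BeamBasedBuilder._split_generators` already expects internally:
```python
from datasets import get_dataset_split_names

# Fails today because wikipedia's _split_generators(self, dl_manager, pipeline)
# is called without the `pipeline` argument it requires:
get_dataset_split_names("wikipedia", "20220301.fr")

# Hypothetical API once the argument is forwarded down to the builder:
# import apache_beam as beam
# get_dataset_split_names("wikipedia", "20220301.fr", pipeline=beam.Pipeline())
```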
**Additional context**
The dataset viewer is not working out-of-the-box on `wikipedia` for this reason:
https://huggingface.co/datasets/wikipedia/viewer
<img width="637" alt="Capture d’écran 2022-08-08 à 12 01 16" src="https://user-images.githubusercontent.com/1676121/183461838-5330783b-0269-4ba7-a999-314cde2023d8.png">
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4803/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4802 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4802/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4802/comments | https://api.github.com/repos/huggingface/datasets/issues/4802/events | https://github.com/huggingface/datasets/issues/4802 | 1,331,676,691 | I_kwDODunzps5PX8YT | 4,802 | `with_format` behavior is inconsistent on different datasets | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi! You can get a `torch.Tensor` if you do the following:\r\n```python\r\nraw = load_dataset(\"beans\", split=\"train\")\r\nraw = raw.select(range(100))\r\n\r\npreprocessor = AutoFeatureExtractor.from_pretrained(\"nateraw/vit-base-beans\")\r\n\r\nfrom datasets import Array3D\r\nfeatures = raw.features.copy()\r\nfeatures[\"pixel_values\"] = datasets.Array3D(shape=(3, 224, 224), dtype=\"float32\")\r\n\r\ndef preprocess_func(examples):\r\n imgs = [img.convert(\"RGB\") for img in examples[\"image\"]]\r\n return preprocessor(imgs)\r\n\r\ndata = raw.map(preprocess_func, batched=True, features=features)\r\n\r\nprint(type(data[0][\"pixel_values\"]))\r\n\r\ndata = data.with_format(\"torch\", columns=[\"pixel_values\"])\r\n\r\nprint(type(data[0][\"pixel_values\"]))\r\n```\r\n\r\nThe reason for this \"inconsistency\" in the default case is the way PyArrow infers the type of multi-dim arrays (in this case, the `pixel_values` column). If the type is not specified manually, PyArrow assumes it is a dynamic-length sequence (it needs to know the type before writing the first batch to a cache file, and it can't be sure the array is fixed ahead of time; `ArrayXD` is our way of telling that the dims are fixed), so it already fails to convert the corresponding array to NumPy properly (you get an array of `np.object` arrays). And `with_format(\"torch\")` replaces NumPy arrays with Torch tensors, so this bad formatting propagates."
] | 1,659,955,294,000 | 1,660,063,749,000 | null | CONTRIBUTOR | null | ## Describe the bug
I found a case where `with_format` does not transform the dataset to the requested format.
## Steps to reproduce the bug
Run:
```python
from transformers import AutoTokenizer, AutoFeatureExtractor
from datasets import load_dataset
raw = load_dataset("glue", "sst2", split="train")
raw = raw.select(range(100))
tokenizer = AutoTokenizer.from_pretrained("philschmid/tiny-bert-sst2-distilled")
def preprocess_func(examples):
return tokenizer(examples["sentence"], padding=True, max_length=256, truncation=True)
data = raw.map(preprocess_func, batched=True)
print(type(data[0]["input_ids"]))
data = data.with_format("torch", columns=["input_ids"])
print(type(data[0]["input_ids"]))
```
printing as expected:
```python
<class 'list'>
<class 'torch.Tensor'>
```
Then run:
```python
raw = load_dataset("beans", split="train")
raw = raw.select(range(100))
preprocessor = AutoFeatureExtractor.from_pretrained("nateraw/vit-base-beans")
def preprocess_func(examples):
imgs = [img.convert("RGB") for img in examples["image"]]
return preprocessor(imgs)
data = raw.map(preprocess_func, batched=True)
print(type(data[0]["pixel_values"]))
data = data.with_format("torch", columns=["pixel_values"])
print(type(data[0]["pixel_values"]))
```
Printing, unexpectedly
```python
<class 'list'>
<class 'list'>
```
## Expected results
`with_format` should transform the selected columns into the requested format, but here it does not.
## Actual results
`type(data[0]["pixel_values"])` is still `list` in the second example above, although it should be `torch.Tensor`.
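For reference, declaring the fixed shape up front restores the expected behavior (a sketch based on the discussion in the comments; `(3, 224, 224)` matches this feature extractor's output):
```python
from datasets import Array3D

features = raw.features.copy()
features["pixel_values"] = Array3D(shape=(3, 224, 224), dtype="float32")
data = raw.map(preprocess_func, batched=True, features=features)
data = data.with_format("torch", columns=["pixel_values"])
print(type(data[0]["pixel_values"]))  # <class 'torch.Tensor'>
```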
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: dev version, commit 44af3fafb527302282f6b6507b952de7435f0979
- Platform: Linux
- Python version: 3.9.12
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4802/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4801 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4801/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4801/comments | https://api.github.com/repos/huggingface/datasets/issues/4801/events | https://github.com/huggingface/datasets/pull/4801 | 1,331,337,418 | PR_kwDODunzps48yTYu | 4,801 | Fix fine classes in trec dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4801). All of your documentation changes will be reflected on that endpoint."
] | 1,659,935,462,000 | 1,660,022,935,000 | null | MEMBER | null | This PR:
- replaces the fine labels, so that there are 50 instead of 47
- once the missing labels are added, all labels (fine and coarse) are re-ordered, so that they align with the order in: https://cogcomp.seas.upenn.edu/Data/QA/QC/definition.html
- the feature names have been fixed: `fine_label` instead of `label-fine`
  - snake_case (underscores instead of hyphens)
  - words re-ordered
Fix #4790. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4801/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4801",
"html_url": "https://github.com/huggingface/datasets/pull/4801",
"diff_url": "https://github.com/huggingface/datasets/pull/4801.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4801.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4800 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4800/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4800/comments | https://api.github.com/repos/huggingface/datasets/issues/4800/events | https://github.com/huggingface/datasets/pull/4800 | 1,331,288,128 | PR_kwDODunzps48yIss | 4,800 | support LargeListArray in pyarrow | {
"login": "xwwwwww",
"id": 48146603,
"node_id": "MDQ6VXNlcjQ4MTQ2NjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/48146603?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xwwwwww",
"html_url": "https://github.com/xwwwwww",
"followers_url": "https://api.github.com/users/xwwwwww/followers",
"following_url": "https://api.github.com/users/xwwwwww/following{/other_user}",
"gists_url": "https://api.github.com/users/xwwwwww/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xwwwwww/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xwwwwww/subscriptions",
"organizations_url": "https://api.github.com/users/xwwwwww/orgs",
"repos_url": "https://api.github.com/users/xwwwwww/repos",
"events_url": "https://api.github.com/users/xwwwwww/events{/privacy}",
"received_events_url": "https://api.github.com/users/xwwwwww/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4800). All of your documentation changes will be reflected on that endpoint.",
"Hi, thanks for working on this! Can you run `make style` at the repo root to fix the code quality error in CI and add a test?",
"Hi, I have fixed the code quality error and added a test",
"It seems that CI fails due to the lack of memory for allocating a large array, while I pass the test locally."
] | 1,659,931,126,000 | 1,660,184,074,000 | null | CONTRIBUTOR | null | ```python
import numpy as np
import datasets
a = np.zeros((5000000, 768))
res = datasets.Dataset.from_dict({"embedding": a})
'''
File "/home/wenjiaxin/anaconda3/envs/data/lib/python3.8/site-packages/datasets/arrow_writer.py", line 178, in __arrow_array__
out = numpy_to_pyarrow_listarray(data)
File "/home/wenjiaxin/anaconda3/envs/data/lib/python3.8/site-packages/datasets/features/features.py", line 1173, in numpy_to_pyarrow_listarray
offsets = pa.array(np.arange(n_offsets + 1) * step_offsets, type=pa.int32())
File "pyarrow/array.pxi", line 312, in pyarrow.lib.array
File "pyarrow/array.pxi", line 83, in pyarrow.lib._ndarray_to_array
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Integer value 2147483904 not in range: -2147483648 to 2147483647
'''
```
Loading a large NumPy array currently raises the error above because the list offsets are typed as `int32`.
PyArrow supports [LargeListArray](https://arrow.apache.org/docs/python/generated/pyarrow.LargeListArray.html), which uses `int64` offsets, for this case.
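A sketch of building the array with 64-bit offsets (2-D case only; this mirrors `numpy_to_pyarrow_listarray` but with `LargeListArray`, and the function name here is illustrative):
```python
import numpy as np
import pyarrow as pa

def numpy_to_pyarrow_large_listarray(arr: np.ndarray) -> pa.LargeListArray:
    values = pa.array(arr.reshape(-1))  # flatten to a 1-D values array
    n_rows, step = arr.shape
    offsets = pa.array(np.arange(n_rows + 1, dtype=np.int64) * step, type=pa.int64())
    return pa.LargeListArray.from_arrays(offsets, values)  # int64 offsets: no 2**31 limit

a = np.zeros((10, 768))
print(numpy_to_pyarrow_large_listarray(a).type)  # large_list<item: double>
```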
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4800/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4800/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4800",
"html_url": "https://github.com/huggingface/datasets/pull/4800",
"diff_url": "https://github.com/huggingface/datasets/pull/4800.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4800.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4799 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4799/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4799/comments | https://api.github.com/repos/huggingface/datasets/issues/4799/events | https://github.com/huggingface/datasets/issues/4799 | 1,330,889,854 | I_kwDODunzps5PU8R- | 4,799 | video dataset loader/parser | {
"login": "nollied",
"id": 26421036,
"node_id": "MDQ6VXNlcjI2NDIxMDM2",
"avatar_url": "https://avatars.githubusercontent.com/u/26421036?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nollied",
"html_url": "https://github.com/nollied",
"followers_url": "https://api.github.com/users/nollied/followers",
"following_url": "https://api.github.com/users/nollied/following{/other_user}",
"gists_url": "https://api.github.com/users/nollied/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nollied/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nollied/subscriptions",
"organizations_url": "https://api.github.com/users/nollied/orgs",
"repos_url": "https://api.github.com/users/nollied/repos",
"events_url": "https://api.github.com/users/nollied/events{/privacy}",
"received_events_url": "https://api.github.com/users/nollied/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! We've just started discussing the video support in `datasets` (decoding backends, video feature type, etc.), so I believe we should have something tangible by the end of this year.\r\n\r\nAlso, if you have additional video features in mind that you would like to see, feel free to let us know",
"Coool thanks @mariosasko "
] | 1,659,837,252,000 | 1,660,063,371,000 | 1,660,063,371,000 | CONTRIBUTOR | null | You know how you can [use `load_dataset` with any arbitrary csv file](https://huggingface.co/docs/datasets/loading#csv)? And you can also [use it to load a local image dataset](https://huggingface.co/docs/datasets/image_load#local-files)?
Could you please add functionality to load a video dataset? It would be really cool if I could point it to a bunch of video files and use PyTorch to start looping through batches of videos, e.g. with a batch size of 16 where each sample in the batch is a frame from a video. I'm competing in the [MineRL challenge](https://www.aicrowd.com/challenges/neurips-2022-minerl-basalt-competition), and it would be awesome to use the HF ecosystem. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4799/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4798 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4798/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4798/comments | https://api.github.com/repos/huggingface/datasets/issues/4798/events | https://github.com/huggingface/datasets/pull/4798 | 1,330,699,942 | PR_kwDODunzps48wVEG | 4,798 | Shard generator | {
"login": "marianna13",
"id": 43296932,
"node_id": "MDQ6VXNlcjQzMjk2OTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/43296932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marianna13",
"html_url": "https://github.com/marianna13",
"followers_url": "https://api.github.com/users/marianna13/followers",
"following_url": "https://api.github.com/users/marianna13/following{/other_user}",
"gists_url": "https://api.github.com/users/marianna13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marianna13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marianna13/subscriptions",
"organizations_url": "https://api.github.com/users/marianna13/orgs",
"repos_url": "https://api.github.com/users/marianna13/repos",
"events_url": "https://api.github.com/users/marianna13/events{/privacy}",
"received_events_url": "https://api.github.com/users/marianna13/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,659,777,246,000 | 1,659,777,268,000 | null | NONE | null | Hi everyone! I was using Hugging Face datasets to process some very large datasets and found that it would be quite handy to have a feature that allows "splitting" these large datasets into equally sized chunks. Even better: being able to run through these chunks one by one in a simple and convenient way. So I decided to add a method called shard_generator() to the main Dataset class. It works similarly to the shard method, but returns a generator of datasets of equal size (defined by the shard_size attribute); a standalone implementation sketch also appears after this record.
Example:
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds
Dataset({
features: ['text', 'label'],
num_rows: 1066
})
>>> next(ds.shard_generator(300))
Dataset({
features: ['text', 'label'],
num_rows: 300
})
```
I hope it can be helpful to someone. Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4798/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4798/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4798",
"html_url": "https://github.com/huggingface/datasets/pull/4798",
"diff_url": "https://github.com/huggingface/datasets/pull/4798.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4798.patch",
"merged_at": null
} | true |
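The shard_generator proposed in the record above can be approximated today with Dataset.select; a minimal sketch under that assumption, not the PR's actual implementation:

```python
from datasets import Dataset, load_dataset

def shard_generator(ds: Dataset, shard_size: int):
    # Yield contiguous chunks of `shard_size` rows; the last chunk
    # may be smaller if len(ds) is not a multiple of shard_size.
    for start in range(0, len(ds), shard_size):
        yield ds.select(range(start, min(start + shard_size, len(ds))))

ds = load_dataset("rotten_tomatoes", split="validation")
print(next(shard_generator(ds, 300)).num_rows)  # 300
```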
https://api.github.com/repos/huggingface/datasets/issues/4797 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4797/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4797/comments | https://api.github.com/repos/huggingface/datasets/issues/4797/events | https://github.com/huggingface/datasets/pull/4797 | 1,330,000,998 | PR_kwDODunzps48uL-t | 4,797 | Torgo dataset creation | {
"login": "YingLi001",
"id": 75192317,
"node_id": "MDQ6VXNlcjc1MTkyMzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/75192317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YingLi001",
"html_url": "https://github.com/YingLi001",
"followers_url": "https://api.github.com/users/YingLi001/followers",
"following_url": "https://api.github.com/users/YingLi001/following{/other_user}",
"gists_url": "https://api.github.com/users/YingLi001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YingLi001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YingLi001/subscriptions",
"organizations_url": "https://api.github.com/users/YingLi001/orgs",
"repos_url": "https://api.github.com/users/YingLi001/repos",
"events_url": "https://api.github.com/users/YingLi001/events{/privacy}",
"received_events_url": "https://api.github.com/users/YingLi001/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @YingLi001, thanks for your proposal to add this dataset.\r\n\r\nHowever, now we add datasets directly to the Hub (instead of our GitHub repository). You have the instructions in our docs: \r\n- [Create a dataset loading script](https://huggingface.co/docs/datasets/dataset_script)\r\n- [Create a dataset card](https://huggingface.co/docs/datasets/dataset_card)\r\n- [Share](https://huggingface.co/docs/datasets/share)\r\n\r\nFeel free to ask if you need any additional support/help."
] | 1,659,709,106,000 | 1,660,070,760,000 | 1,660,070,760,000 | NONE | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4797/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4797/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4797",
"html_url": "https://github.com/huggingface/datasets/pull/4797",
"diff_url": "https://github.com/huggingface/datasets/pull/4797.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4797.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4796 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4796/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4796/comments | https://api.github.com/repos/huggingface/datasets/issues/4796/events | https://github.com/huggingface/datasets/issues/4796 | 1,329,887,810 | I_kwDODunzps5PRHpC | 4,796 | ArrowInvalid: Could not convert <PIL.Image.Image image mode=RGB when adding image to Dataset | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,659,703,279,000 | 1,659,969,650,000 | null | CONTRIBUTOR | null | ## Describe the bug
When adding a Pillow image to an existing Dataset on the hub, `add_item` fails due to the Pillow image not being automatically converted into the Image feature.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from PIL import Image
dataset = load_dataset("hf-internal-testing/example-documents")
# load any random Pillow image
image = Image.open("/content/cord_example.png").convert("RGB")
new_image = {'image': image}
dataset['test'] = dataset['test'].add_item(new_image)
```
## Expected results
The image should be automatically casted to the Image feature when using `add_item`. For now, this can be fixed by using `encode_example`:
```
import datasets
feature = datasets.Image(decode=False)
new_image = {'image': feature.encode_example(image)}
dataset['test'] = dataset['test'].add_item(new_image)
```
## Actual results
```
ArrowInvalid: Could not convert <PIL.Image.Image image mode=RGB size=576x864 at 0x7F7CCC4589D0> with type Image: did not recognize Python value type when inferring an Arrow data type
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4796/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4795 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4795/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4795/comments | https://api.github.com/repos/huggingface/datasets/issues/4795/events | https://github.com/huggingface/datasets/issues/4795 | 1,329,525,732 | I_kwDODunzps5PPvPk | 4,795 | MBPP splits | {
"login": "stadlerb",
"id": 2452384,
"node_id": "MDQ6VXNlcjI0NTIzODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2452384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stadlerb",
"html_url": "https://github.com/stadlerb",
"followers_url": "https://api.github.com/users/stadlerb/followers",
"following_url": "https://api.github.com/users/stadlerb/following{/other_user}",
"gists_url": "https://api.github.com/users/stadlerb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stadlerb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stadlerb/subscriptions",
"organizations_url": "https://api.github.com/users/stadlerb/orgs",
"repos_url": "https://api.github.com/users/stadlerb/repos",
"events_url": "https://api.github.com/users/stadlerb/events{/privacy}",
"received_events_url": "https://api.github.com/users/stadlerb/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Thanks for reporting this as well, @stadlerb.\r\n\r\nI suggest waiting for the answer of the data owners... "
] | 1,659,682,261,000 | 1,659,935,594,000 | null | NONE | null | (@albertvillanova)
The [MBPP dataset on the Hub](https://huggingface.co/datasets/mbpp) has only a test split for both its "full" and its "sanitized" subset, while the [paper](https://arxiv.org/abs/2108.07732) states in subsection 2.1 regarding the full split:
> In the experiments described later in the paper, we hold out 10 problems for **few-shot prompting**, another 500 as our **test** dataset (which is used to evaluate both few-shot inference and fine-tuned models), 374 problems for **fine-tuning**, and the rest for **validation**.
If the dataset on the Hub should reproduce most closely what the original authors use, I guess this four-way split should be reflected.
The paper doesn't explicitly state the task_id ranges of the splits, but the [GitHub readme](https://github.com/google-research/google-research/tree/master/mbpp) referenced in the paper specifies exact task_id ranges, although it misstates the total number of samples:
> We specify a train and test split to use for evaluation. Specifically:
>
> * Task IDs 11-510 are used for evaluation.
> * Task IDs 1-10 and 511-1000 are used for training and/or prompting. We typically used 1-10 for few-shot prompting, although you can feel free to use any of the training examples.
I.e. the few-shot, train and validation splits are combined into one split, with a soft suggestion of using the first ten for few-shot prompting. It is not explicitly stated whether the 374 fine-tuning samples mentioned in the paper have task_id 511 to 784 or 601 to 974 or are randomly sampled from task_id 511 to 974.
Regarding the "sanitized" split the paper states the following:
> For evaluations involving the edited dataset, we perform comparisons with 100 problems that appear in both the original and edited dataset, using the same held out 10 problems for few-shot prompting and 374 problems for fine-tuning.
The statement doesn't appear to be very precise: among the 10 few-shot problems, those with task_id 1, 5 and 10 are not even part of the sanitized variant, and many from the task_id range 511 to 974 are missing (e.g. task_id 511 to 553). I suppose the idea is that the task_id ranges for each split remain the same, even if some of the task_ids are not present. That would result in 7 few-shot, 257 test, 141 train and 22 validation examples in the sanitized split (a filtering sketch follows this record). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4795/timeline | null | null | null | null | false |
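Until the four-way split discussed above lands in the loader, an approximation can be built by filtering on task_id, using the ranges quoted from the GitHub readme; note the exact fine-tuning/validation boundary inside 511-1000 is left unspecified by the paper, so only the combined pool is separated here:

```python
from datasets import load_dataset

ds = load_dataset("mbpp", "full", split="test")  # the loader currently exposes one split

prompt = ds.filter(lambda ex: 1 <= ex["task_id"] <= 10)   # few-shot prompting
test = ds.filter(lambda ex: 11 <= ex["task_id"] <= 510)   # evaluation
pool = ds.filter(lambda ex: ex["task_id"] >= 511)         # fine-tuning + validation pool
```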
https://api.github.com/repos/huggingface/datasets/issues/4792 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4792/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4792/comments | https://api.github.com/repos/huggingface/datasets/issues/4792/events | https://github.com/huggingface/datasets/issues/4792 | 1,328,593,929 | I_kwDODunzps5PMLwJ | 4,792 | Add DocVQA | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [
"Thanks for proposing, @NielsRogge.\r\n\r\nPlease, note this dataset requires registering in their website and their Terms and Conditions state we cannot distribute their URL:\r\n```\r\n1. You will NOT distribute the download URLs\r\n...\r\n```"
] | 1,659,618,446,000 | 1,659,936,680,000 | null | CONTRIBUTOR | null | ## Adding a Dataset
- **Name:** DocVQA
- **Description:** Document Visual Question Answering (DocVQA) seeks to inspire a “purpose-driven” point of view in Document Analysis and Recognition research, where the document content is extracted and used to respond to high-level tasks defined by the human consumers of this information.
- **Paper:** https://arxiv.org/abs/2007.00398
- **Data:** https://www.docvqa.org/datasets/docvqa
- **Motivation:** Models like LayoutLM and Donut in the Transformers library are fine-tuned on DocVQA. Would be very handy to directly load this dataset from the hub.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4792/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4792/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4791 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4791/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4791/comments | https://api.github.com/repos/huggingface/datasets/issues/4791/events | https://github.com/huggingface/datasets/issues/4791 | 1,328,571,064 | I_kwDODunzps5PMGK4 | 4,791 | Dataset Viewer issue for Team-PIXEL/rendered-wikipedia-english | {
"login": "xplip",
"id": 25847814,
"node_id": "MDQ6VXNlcjI1ODQ3ODE0",
"avatar_url": "https://avatars.githubusercontent.com/u/25847814?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xplip",
"html_url": "https://github.com/xplip",
"followers_url": "https://api.github.com/users/xplip/followers",
"following_url": "https://api.github.com/users/xplip/following{/other_user}",
"gists_url": "https://api.github.com/users/xplip/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xplip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xplip/subscriptions",
"organizations_url": "https://api.github.com/users/xplip/orgs",
"repos_url": "https://api.github.com/users/xplip/repos",
"events_url": "https://api.github.com/users/xplip/events{/privacy}",
"received_events_url": "https://api.github.com/users/xplip/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting. It's a known issue that should be fixed soon. Meanwhile, I had to manually trigger the dataset viewer. It's OK now.\r\nNote that the extreme aspect ratio of the images generates another issue, that we're inspecting."
] | 1,659,617,356,000 | 1,659,620,596,000 | 1,659,620,596,000 | NONE | null | ### Link
https://huggingface.co/datasets/Team-PIXEL/rendered-wikipedia-english/viewer/rendered-wikipedia-en/train
### Description
The dataset can be loaded fine but the viewer shows this error:
```
Server Error
Status code: 400
Exception: Status400Error
Message: The dataset does not exist.
```
I'm guessing this is because I recently renamed the dataset. Based on related issues (e.g. https://github.com/huggingface/datasets/issues/4759) , is there something server-side that needs to be refreshed?
### Owner
Yes | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4791/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4790 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4790/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4790/comments | https://api.github.com/repos/huggingface/datasets/issues/4790/events | https://github.com/huggingface/datasets/issues/4790 | 1,328,546,904 | I_kwDODunzps5PMARY | 4,790 | Issue with fine classes in trec dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,659,616,131,000 | 1,659,622,296,000 | null | MEMBER | null | ## Describe the bug
According to their paper, the TREC dataset contains 2 kinds of classes:
- 6 coarse classes: TREC-6
- 50 fine classes: TREC-50
However, our implementation only has 47 (instead of 50) fine classes. The reason for this is that we only considered the last segment of the label, which is repeated for several coarse classes:
- We have one `desc` fine label instead of 2:
- `DESC:desc`
- `HUM:desc`
- We have one `other` fine label instead of 3:
- `ENTY:other`
- `LOC:other`
- `NUM:other`
From their paper:
> We define a two-layered taxonomy, which represents a natural semantic classification for typical answers in the TREC task. The hierarchy contains 6 coarse classes and 50 fine classes,
> Each coarse class contains a non-overlapping set of fine classes.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4790/timeline | null | null | null | null | false |
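One possible direction for the fix described above is to key each fine label on the full COARSE:fine string rather than its last segment, so that repeated suffixes like desc and other stay distinct; a minimal sketch, not the actual loader code:

```python
def parse_trec_label(raw_label: str):
    # Keep the whole "COARSE:fine" string as the fine label so that
    # suffixes repeated across coarse classes remain distinct.
    coarse = raw_label.split(":", 1)[0]
    return coarse, raw_label

print(parse_trec_label("HUM:desc"))    # ('HUM', 'HUM:desc')
print(parse_trec_label("ENTY:other"))  # ('ENTY', 'ENTY:other')
```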
https://api.github.com/repos/huggingface/datasets/issues/4789 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4789/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4789/comments | https://api.github.com/repos/huggingface/datasets/issues/4789/events | https://github.com/huggingface/datasets/pull/4789 | 1,328,409,253 | PR_kwDODunzps48o3Kk | 4,789 | Update doc upload_dataset.mdx | {
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4789). All of your documentation changes will be reflected on that endpoint."
] | 1,659,608,640,000 | 1,659,609,064,000 | null | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4789/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4789",
"html_url": "https://github.com/huggingface/datasets/pull/4789",
"diff_url": "https://github.com/huggingface/datasets/pull/4789.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4789.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4788 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4788/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4788/comments | https://api.github.com/repos/huggingface/datasets/issues/4788/events | https://github.com/huggingface/datasets/pull/4788 | 1,328,246,021 | PR_kwDODunzps48oUNx | 4,788 | Fix NonMatchingChecksumError in mbpp dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you for the quick response! Before noticing that you already had implemented the fix, I already had implemened my own version. I'd also suggest bumping the major version because the contents of the dataset changed, even if only slightly.\r\nI'll attach my version of the affected files: [mbpp-checksum-changes.zip](https://github.com/huggingface/datasets/files/9258161/mbpp-checksum-changes.zip).",
"Hi @stadlerb, thanks for your feedback.\r\n\r\nWe normally update the major version whenever there is a new dataset release, usually with a breaking change in schema. The patch version is updated whenever there is a small correction in the dataset that does not change its schema.\r\n\r\nAs a side note for future contributions, please note that this dataset is hosted in our library GitHub repository. Therefore, the PRs to GitHub-hosted datasets needs being done through GitHub.\r\n\r\nCurrently added datasets are hosted on the Hub and for them, PRs can be done through the Hub.",
"I just noticed another problem with the dataset: The [GitHub page](https://github.com/google-research/google-research/tree/master/mbpp) and the [paper](http://arxiv.org/abs/2108.07732) mention a train-test split, which is not reflected in the dataloader. I'll open a new issue regarding this later."
] | 1,659,601,060,000 | 1,659,634,440,000 | 1,659,633,661,000 | MEMBER | null | Fix issue reported on the Hub: https://huggingface.co/datasets/mbpp/discussions/1
Fix #4787. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4788/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4788/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4788",
"html_url": "https://github.com/huggingface/datasets/pull/4788",
"diff_url": "https://github.com/huggingface/datasets/pull/4788.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4788.patch",
"merged_at": 1659633661000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4787 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4787/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4787/comments | https://api.github.com/repos/huggingface/datasets/issues/4787/events | https://github.com/huggingface/datasets/issues/4787 | 1,328,243,911 | I_kwDODunzps5PK2TH | 4,787 | NonMatchingChecksumError in mbpp dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,659,600,951,000 | 1,659,633,661,000 | 1,659,633,661,000 | MEMBER | null | ## Describe the bug
As reported on the Hub [Fix Checksum Mismatch](https://huggingface.co/datasets/mbpp/discussions/1), there is a `NonMatchingChecksumError` when loading the mbpp dataset
## Steps to reproduce the bug
```python
ds = load_dataset("mbpp", "full")
```
## Expected results
Loading of the dataset without any exception raised.
## Actual results
```
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-1-a3fbdd3ed82e> in <module>
----> 1 ds = load_dataset("mbpp", "full")
.../huggingface/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1791
1792 # Download and prepare data
-> 1793 builder_instance.download_and_prepare(
1794 download_config=download_config,
1795 download_mode=download_mode,
.../huggingface/datasets/src/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
702 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
703 if not downloaded_from_gcs:
--> 704 self._download_and_prepare(
705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
706 )
.../huggingface/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)
1225
1226 def _download_and_prepare(self, dl_manager, verify_infos):
-> 1227 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
1228
1229 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable:
.../huggingface/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
773 # Checksums verification
774 if verify_infos and dl_manager.record_checksums:
--> 775 verify_checksums(
776 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
777 )
.../huggingface/datasets/src/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
38 if len(bad_urls) > 0:
39 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))
41 logger.info("All the checksums matched successfully" + for_verification_name)
42
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://raw.githubusercontent.com/google-research/google-research/master/mbpp/mbpp.jsonl']
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4787/timeline | null | completed | null | null | false |
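As a stopgap while the metadata fix above propagates, checksum verification can be skipped at load time; a sketch using the flag available in this era of datasets (newer releases replace it with verification_mode):

```python
from datasets import load_dataset

# Skip checksum verification until the updated dataset metadata ships.
ds = load_dataset("mbpp", "full", ignore_verifications=True)
```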
https://api.github.com/repos/huggingface/datasets/issues/4786 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4786/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4786/comments | https://api.github.com/repos/huggingface/datasets/issues/4786/events | https://github.com/huggingface/datasets/issues/4786 | 1,327,340,828 | I_kwDODunzps5PHZ0c | 4,786 | .save_to_disk('path', fs=s3) TypeError | {
"login": "hongknop",
"id": 110547763,
"node_id": "U_kgDOBpbTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/110547763?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hongknop",
"html_url": "https://github.com/hongknop",
"followers_url": "https://api.github.com/users/hongknop/followers",
"following_url": "https://api.github.com/users/hongknop/following{/other_user}",
"gists_url": "https://api.github.com/users/hongknop/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hongknop/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hongknop/subscriptions",
"organizations_url": "https://api.github.com/users/hongknop/orgs",
"repos_url": "https://api.github.com/users/hongknop/repos",
"events_url": "https://api.github.com/users/hongknop/events{/privacy}",
"received_events_url": "https://api.github.com/users/hongknop/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,659,538,169,000 | 1,659,540,180,000 | 1,659,540,180,000 | NONE | null | The following code:
```python
import datasets
train_dataset, test_dataset = load_dataset("imdb", split=["train", "test"])
s3 = datasets.filesystems.S3FileSystem(key=aws_access_key_id, secret=aws_secret_access_key)
train_dataset.save_to_disk("s3://datasets/", fs=s3)
```
produces the following traceback:
```shell
File "C:\Users\Hong Knop\AppData\Local\Programs\Python\Python310\lib\site-packages\botocore\auth.py", line 374, in scope
return '/'.join(scope)
```
I invoke print(scope) in <auth.py> (line 373) and find this:
```python
[('4VA08VLL3VTKQJKCAI8M',), '20220803', 'us-east-1', 's3', 'aws4_request']
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4786/timeline | null | completed | null | null | false |
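The scope printed in the record above shows the access key wrapped in a one-element tuple, which is what a trailing comma after a string literal produces; a hedged sketch of that likely cause (the key string is the one from the report, not a live credential):

```python
# A trailing comma turns the credential into a 1-tuple, not a str,
# which later breaks '/'.join(scope) inside botocore:
aws_access_key_id = "4VA08VLL3VTKQJKCAI8M",
print(type(aws_access_key_id))  # <class 'tuple'>

# Without the comma the value is a plain string, as boto expects:
aws_access_key_id = "4VA08VLL3VTKQJKCAI8M"
print(type(aws_access_key_id))  # <class 'str'>
```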
https://api.github.com/repos/huggingface/datasets/issues/4785 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4785/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4785/comments | https://api.github.com/repos/huggingface/datasets/issues/4785/events | https://github.com/huggingface/datasets/pull/4785 | 1,327,225,826 | PR_kwDODunzps48k8y4 | 4,785 | Require torchaudio<0.12.0 in docs | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,659,533,520,000 | 1,659,539,263,000 | 1,659,538,336,000 | MEMBER | null | This PR adds to docs the requirement of torchaudio<0.12.0 to avoid RuntimeError.
Subsequent to PR:
- #4777 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4785/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4785/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4785",
"html_url": "https://github.com/huggingface/datasets/pull/4785",
"diff_url": "https://github.com/huggingface/datasets/pull/4785.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4785.patch",
"merged_at": 1659538336000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4784 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4784/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4784/comments | https://api.github.com/repos/huggingface/datasets/issues/4784/events | https://github.com/huggingface/datasets/issues/4784 | 1,326,395,280 | I_kwDODunzps5PDy-Q | 4,784 | Add Multiface dataset | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] | open | false | null | [] | null | [
"Hi @osanseviero I would like to add this dataset.",
"Hey @nandwalritik! Thanks for offering to help!\r\n\r\nThis dataset might be somewhat complex and I'm concerned about it being 65 TB, which would be quite expensive to host. @lhoestq @mariosasko I would love your input if you think it's worth adding this dataset.",
"Thanks for proposing this interesting dataset, @osanseviero.\r\n\r\nPlease note that the data files are already hosted in a third-party server: e.g. the index of data files for entity \"6795937\" is at https://fb-baas-f32eacb9-8abb-11eb-b2b8-4857dd089e15.s3.amazonaws.com/MugsyDataRelease/v0.0/identities/6795937/index.html \r\n- audio.tar: https://fb-baas-f32eacb9-8abb-11eb-b2b8-4857dd089e15.s3.amazonaws.com/MugsyDataRelease/v0.0/identities/6795937/audio.tar\r\n- ...\r\n\r\nTherefore, in principle, we don't need to host them on our Hub: it would be enough to just implement a loading script in the corresponding Hub dataset repo, e.g. \"facebook/multiface\"..."
] | 1,659,474,022,000 | 1,659,969,756,000 | null | MEMBER | null | ## Adding a Dataset
- **Name:** Multiface dataset
- **Description:** High-quality recordings of the faces of 13 identities, each captured in a multi-view capture stage performing various facial expressions, with an average of 12,200 (v1 scripts) to 23,000 (v2 scripts) frames per subject at a capture rate of 30 fps
- **Data:** https://github.com/facebookresearch/multiface
The whole dataset is 65TB though, so I'm not sure
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4784/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4784/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4783 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4783/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4783/comments | https://api.github.com/repos/huggingface/datasets/issues/4783/events | https://github.com/huggingface/datasets/pull/4783 | 1,326,375,011 | PR_kwDODunzps48iHey | 4,783 | [WIP] Docs for creating a loading script for image datasets | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4783). All of your documentation changes will be reflected on that endpoint."
] | 1,659,472,563,000 | 1,659,560,623,000 | null | MEMBER | null | This PR is a first draft of creating a loading script for image datasets. Feel free to let me know if there are any specificities I'm missing for this. 🙂
To do:
- [x] Document how to create different configurations. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4783/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4783/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4783",
"html_url": "https://github.com/huggingface/datasets/pull/4783",
"diff_url": "https://github.com/huggingface/datasets/pull/4783.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4783.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4782 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4782/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4782/comments | https://api.github.com/repos/huggingface/datasets/issues/4782/events | https://github.com/huggingface/datasets/issues/4782 | 1,326,247,158 | I_kwDODunzps5PDOz2 | 4,782 | pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 2147483648 | {
"login": "conceptofmind",
"id": 25208228,
"node_id": "MDQ6VXNlcjI1MjA4MjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/25208228?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/conceptofmind",
"html_url": "https://github.com/conceptofmind",
"followers_url": "https://api.github.com/users/conceptofmind/followers",
"following_url": "https://api.github.com/users/conceptofmind/following{/other_user}",
"gists_url": "https://api.github.com/users/conceptofmind/gists{/gist_id}",
"starred_url": "https://api.github.com/users/conceptofmind/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/conceptofmind/subscriptions",
"organizations_url": "https://api.github.com/users/conceptofmind/orgs",
"repos_url": "https://api.github.com/users/conceptofmind/repos",
"events_url": "https://api.github.com/users/conceptofmind/events{/privacy}",
"received_events_url": "https://api.github.com/users/conceptofmind/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Thanks for reporting @conceptofmind.\r\n\r\nCould you please give details about your environment? \r\n```\r\n## Environment info\r\n<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->\r\n- `datasets` version:\r\n- Platform:\r\n- Python version:\r\n- PyArrow version:\r\n```",
"Hi @albertvillanova ,\r\n\r\nHere is the environment information:\r\n```\r\n- `datasets` version: 2.3.2\r\n- Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.27\r\n- Python version: 3.9.12\r\n- PyArrow version: 7.0.0\r\n- Pandas version: 1.4.2\r\n```\r\nThanks,\r\n\r\nEnrico"
] | 1,659,465,365,000 | 1,659,545,056,000 | null | NONE | null | ## Describe the bug
Following the example in CodeParrot, I receive an array size limitation error when deduplicating larger datasets.
## Steps to reproduce the bug
```python
dataset_name = "the_pile"
ds = load_dataset(dataset_name, split="train")
ds = ds.map(preprocess, num_proc=num_workers)
uniques = set(ds.unique("hash"))
```
Gists for minimum reproducible example:
https://gist.github.com/conceptofmind/c5804428ea1bd89767815f9cd5f02d9a
https://gist.github.com/conceptofmind/feafb07e236f28d79c2d4b28ffbdb6e2
## Expected results
Chunking and writing out a deduplicated dataset.
## Actual results
```
return dataset._data.column(column).unique().to_pylist()
File "pyarrow/table.pxi", line 394, in pyarrow.lib.ChunkedArray.unique
File "pyarrow/_compute.pyx", line 531, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 330, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 124, in pyarrow.lib.check_status
pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 2147483648
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4782/timeline | null | null | null | null | false |
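A possible workaround for the capacity error above is to avoid Dataset.unique, which materializes the whole column as one Arrow array, and gather the hashes in Python batches instead; a minimal self-contained sketch with a toy stand-in for the hashed dataset:

```python
from datasets import Dataset

# Toy stand-in for the dataset with a "hash" column from the snippet above.
ds = Dataset.from_dict({"hash": ["a", "b", "a", "c"]})

uniques = set()
batch_size = 2  # use something like 10_000 on a real dataset
for start in range(0, len(ds), batch_size):
    # Slicing returns plain Python lists, so no single oversized Arrow
    # array is built the way ds.unique("hash") would build one.
    uniques.update(ds[start : start + batch_size]["hash"])

print(sorted(uniques))  # ['a', 'b', 'c']
```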
Dataset Card for GitHub Issues
Dataset Summary
GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets repository. It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond.
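For orientation, the dataset loads like any other Hub dataset; the repo id below is a placeholder, so substitute the path this card actually lives under:

```python
from datasets import load_dataset

# "your-username/github-issues" is a placeholder repo id.
issues = load_dataset("your-username/github-issues", split="train")
print(issues)
```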
Supported Tasks and Leaderboards
For each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). Give a similar description of tasks that were not covered by the structured tag set (replace the `task-category-tag` with an appropriate `other:other-task-name`).

- `task-category-tag`: The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a high/low metric name. The (model name or model class) model currently achieves the following score. [IF A LEADERBOARD IS AVAILABLE]: This task has an active leaderboard which can be found at leaderboard url and ranks models based on metric name while also reporting other metric name.
Languages
Provide a brief overview of the languages represented in the dataset. Describe relevant details about specifics of the language such as whether it is social media text, African American English,...
When relevant, please provide BCP-47 codes, which consist of a primary language subtag, with a script subtag and/or region subtag if available.
Dataset Structure
Data Instances
Provide a JSON-formatted example and brief description of a typical instance in the dataset. If available, provide a link to further examples.
```
{
  'example_field': ...,
  ...
}
```
Provide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit.
Data Fields
List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the dataset contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.
- `example_field`: description of `example_field`

Note that the descriptions can be initialized with the Show Markdown Data Fields output of the tagging app; you will then only need to refine the generated descriptions.
Data Splits
Describe and name the splits in the dataset if there is more than one.
Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.
Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:
|                         | Train | Valid | Test |
| ----------------------- | ----- | ----- | ---- |
| Input Sentences         |       |       |      |
| Average Sentence Length |       |       |      |
Dataset Creation
Curation Rationale
What need motivated the creation of this dataset? What are some of the reasons underlying the major choices involved in putting it together?
### Source Data
This section describes the source data (e.g. news text and headlines, social media posts, translated sentences,...)
#### Initial Data Collection and Normalization
Describe the data collection process. Describe any criteria for data selection or filtering. List any key words or search terms used. If possible, include runtime information for the collection process.
If data was collected from other pre-existing datasets, link to source here and to their Hugging Face version.
If the data was modified or normalized after being collected (e.g. if the data is word-tokenized), describe the process and the tools used.
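Since the collection process is left unspecified above, here is a hedged sketch of how such a dump can be gathered with the public GitHub REST API. The endpoint and pagination parameters are standard, but this is not the exact script used for this dataset, and `GITHUB_TOKEN` is assumed to be set in the environment.

```python
# Hedged sketch of collecting the raw dump from the GitHub REST API.
# Not the exact script used for this dataset; GITHUB_TOKEN is assumed
# to be set to raise the API rate limit.
import os
import requests

headers = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}
url = "https://api.github.com/repos/huggingface/datasets/issues"

all_issues, page = [], 1
while True:
    resp = requests.get(
        url,
        headers=headers,
        params={"state": "all", "per_page": 100, "page": page},
    )
    batch = resp.json()
    if not batch:  # an empty page means we have paged past the last issue
        break
    all_issues.extend(batch)
    page += 1

print(f"Fetched {len(all_issues)} issues and pull requests")
```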
#### Who are the source language producers?
State whether the data was produced by humans or machine generated. Describe the people or systems who originally created the data.
If available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead, state that this information is unknown. See Larson 2017 for using identity categories as variables, particularly gender.
Describe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
Describe other people represented or mentioned in the data. Where possible, link to references for the information.
### Annotations
If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.
#### Annotation process
If applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide interannotator statistics. Describe any annotation validation processes.
#### Who are the annotators?
If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated.
Describe the people or systems who originally created the annotations and their selection criteria if applicable.
If available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. Instead, state that this information is unknown. See Larson 2017 for using identity categories as variables, particularly gender.
Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
### Personal and Sensitive Information
State whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See Larson 2017 for using identity categories as variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data).
State whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history).
If efforts were made to anonymize the data, describe the anonymization process.
## Considerations for Using the Data

### Social Impact of Dataset
Please discuss some of the ways you believe the use of this dataset will impact society.
The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations.
Also describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here.
### Discussion of Biases
Provide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact.
For Wikipedia text, see for example Dinan et al 2020 on biases in Wikipedia (esp. Table 1), or Blodgett et al 2020 for a more general discussion of the topic.
If analyses have been run quantifying these biases, please add brief summaries and links to the studies here.
### Other Known Limitations
If studies of the dataset have outlined other limitations, such as annotation artifacts, please outline and cite them here.
## Additional Information

### Dataset Curators
List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here.
### Licensing Information
Provide the license and link to the license webpage if available.
### Citation Information
Provide the BibTeX-formatted reference for the dataset. For example:
```
@article{article_id,
  author  = {Author List},
  title   = {Dataset Paper Title},
  journal = {Publication Venue},
  year    = {2525}
}
```
If the dataset has a DOI, please provide it here.
### Contributions
Thanks to @lewtun for adding this dataset.