| Column | Type | Lengths / classes |
| --- | --- | --- |
| url | string | lengths 58-61 |
| repository_url | string | 1 value |
| labels_url | string | lengths 72-75 |
| comments_url | string | lengths 67-70 |
| events_url | string | lengths 65-68 |
| html_url | string | lengths 46-51 |
| id | int64 | 599M-1.47B |
| node_id | string | lengths 18-32 |
| number | int64 | 1-5.33k |
| title | string | lengths 1-276 |
| user | dict | |
| labels | list | |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | string | lengths 20 |
| updated_at | string | lengths 20 |
| closed_at | string | lengths 20 |
| author_association | string | 3 values |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | string | lengths 0-228k |
| reactions | dict | |
| timeline_url | string | lengths 67-70 |
| performed_via_github_app | null | |
| state_reason | string | 3 values |
| is_pull_request | bool | 2 classes |
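The listing above is the column schema of a dump of GitHub issues and pull requests from `huggingface/datasets`. As a rough illustration (not part of the original dump), the sketch below uses the `datasets` library to build a small in-memory dataset with a few of these columns and inspect its inferred features; the sample values are copied from the records that follow.

```python
# Minimal sketch, assuming only that the `datasets` library is installed.
# Column names and sample values mirror the schema and records shown on this page.
from datasets import Dataset

ds = Dataset.from_dict({
    "number": [4922, 4921],
    "title": [
        "I/O error on Google Colab in streaming mode",
        "Fix missing tags in dataset cards",
    ],
    "state": ["closed", "closed"],
    "is_pull_request": [False, True],
})

print(ds.features)  # column name -> inferred dtype (int64, string, bool), as in the table above
issues_only = ds.filter(lambda example: not example["is_pull_request"])
print(issues_only[0]["title"])  # "I/O error on Google Colab in streaming mode"
```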
https://api.github.com/repos/huggingface/datasets/issues/4922
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4922/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4922/comments
https://api.github.com/repos/huggingface/datasets/issues/4922/events
https://github.com/huggingface/datasets/issues/4922
1,357,684,018
I_kwDODunzps5Q7J0y
4,922
I/O error on Google Colab in streaming mode
{ "avatar_url": "https://avatars.githubusercontent.com/u/5595043?v=4", "events_url": "https://api.github.com/users/jotterbach/events{/privacy}", "followers_url": "https://api.github.com/users/jotterbach/followers", "following_url": "https://api.github.com/users/jotterbach/following{/other_user}", "gists_url": "https://api.github.com/users/jotterbach/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jotterbach", "id": 5595043, "login": "jotterbach", "node_id": "MDQ6VXNlcjU1OTUwNDM=", "organizations_url": "https://api.github.com/users/jotterbach/orgs", "received_events_url": "https://api.github.com/users/jotterbach/received_events", "repos_url": "https://api.github.com/users/jotterbach/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jotterbach/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jotterbach/subscriptions", "type": "User", "url": "https://api.github.com/users/jotterbach" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
2022-08-31T18:08:26Z
2022-08-31T18:15:48Z
2022-08-31T18:15:48Z
NONE
null
null
null
## Describe the bug When trying to load a streaming dataset in Google Colab the loading fails with an I/O error ## Steps to reproduce the bug ```python import datasets from datasets import load_dataset hf_ds = load_dataset(path='wmt19', name='cs-en', streaming=True, split=datasets.Split.VALIDATION) list(hf_ds.take(5)) ``` ## Expected results It should load five data points ## Actual results ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) [<ipython-input-13-7b5b8b1e7e58>](https://localhost:8080/#) in <module> 2 from datasets import load_dataset 3 hf_ds = load_dataset(path='wmt19', name='cs-en', streaming=True, split=datasets.Split.VALIDATION) ----> 4 list(hf_ds.take(5)) 6 frames [/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in __iter__(self) 716 717 def __iter__(self): --> 718 for key, example in self._iter(): 719 if self.features: 720 # `IterableDataset` automatically fills missing columns with None. [/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in _iter(self) 706 else: 707 ex_iterable = self._ex_iterable --> 708 yield from ex_iterable 709 710 def _iter_shard(self, shard_idx: int): [/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in __iter__(self) 582 583 def __iter__(self): --> 584 yield from islice(self.ex_iterable, self.n) 585 586 def shuffle_data_sources(self, generator: np.random.Generator) -> "TakeExamplesIterable": [/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in __iter__(self) 110 111 def __iter__(self): --> 112 yield from self.generate_examples_fn(**self.kwargs) 113 114 def shuffle_data_sources(self, generator: np.random.Generator) -> "ExamplesIterable": [~/.cache/huggingface/modules/datasets_modules/datasets/wmt19/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0/wmt_utils.py](https://localhost:8080/#) in _generate_examples(self, split_subsets, extraction_map, with_translation) 845 raise ValueError("Invalid number of files: %d" % len(files)) 846 --> 847 for sub_key, ex in sub_generator(*sub_generator_args): 848 if not all(ex.values()): 849 continue [~/.cache/huggingface/modules/datasets_modules/datasets/wmt19/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0/wmt_utils.py](https://localhost:8080/#) in _parse_parallel_sentences(f1, f2, filename1, filename2) 923 l2_sentences, l2 = parse_file(f2_i, filename2) 924 --> 925 for line_id, (s1, s2) in enumerate(zip(l1_sentences, l2_sentences)): 926 key = f"{f_id}/{line_id}" 927 yield key, {l1: s1, l2: s2} [~/.cache/huggingface/modules/datasets_modules/datasets/wmt19/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0/wmt_utils.py](https://localhost:8080/#) in gen() 895 896 def gen(): --> 897 with open(path, encoding="utf-8") as f: 898 for line in f: 899 seg_match = re.match(seg_re, line) ValueError: I/O operation on closed file. ``` ## Environment info Copy-and-paste the text below in your GitHub issue. - `datasets` version: 2.4.0 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 9.0.0. (the same error happened with PyArrow version 6.0.0) - Pandas version: 1.3.5
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4922/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4922/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/4921
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4921/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4921/comments
https://api.github.com/repos/huggingface/datasets/issues/4921/events
https://github.com/huggingface/datasets/pull/4921
1,357,609,003
PR_kwDODunzps4-JVFV
4,921
Fix missing tags in dataset cards
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-31T16:52:27Z
2022-09-22T14:34:11Z
2022-09-01T05:04:53Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4921.diff", "html_url": "https://github.com/huggingface/datasets/pull/4921", "merged_at": "2022-09-01T05:04:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/4921.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4921" }
Fix missing tags in dataset cards: - eraser_multi_rc - hotpot_qa - metooma - movie_rationales - qanta - quora - quoref - race - ted_hrlr - ted_talks_iwslt This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. Related to: - #4833 - #4891 - #4896 - #4908
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4921/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4921/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4920
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4920/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4920/comments
https://api.github.com/repos/huggingface/datasets/issues/4920/events
https://github.com/huggingface/datasets/issues/4920
1,357,564,589
I_kwDODunzps5Q6sqt
4,920
Unable to load local tsv files through load_dataset method
{ "avatar_url": "https://avatars.githubusercontent.com/u/44038517?v=4", "events_url": "https://api.github.com/users/DataNoob0723/events{/privacy}", "followers_url": "https://api.github.com/users/DataNoob0723/followers", "following_url": "https://api.github.com/users/DataNoob0723/following{/other_user}", "gists_url": "https://api.github.com/users/DataNoob0723/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/DataNoob0723", "id": 44038517, "login": "DataNoob0723", "node_id": "MDQ6VXNlcjQ0MDM4NTE3", "organizations_url": "https://api.github.com/users/DataNoob0723/orgs", "received_events_url": "https://api.github.com/users/DataNoob0723/received_events", "repos_url": "https://api.github.com/users/DataNoob0723/repos", "site_admin": false, "starred_url": "https://api.github.com/users/DataNoob0723/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DataNoob0723/subscriptions", "type": "User", "url": "https://api.github.com/users/DataNoob0723" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Hi @DataNoob0723,\r\n\r\nUnder the hood, we use `pandas` to load CSV/TSV files. Therefore, you should use \"csv\" and pass `sep=\"\\t\"`, as explained in our docs: https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/loading_methods#from-files\r\n```python\r\nds = load_dataset('csv', sep=\"\\t\", data_files=data_files)\r\n``` " ]
2022-08-31T16:13:39Z
2022-09-01T05:31:30Z
2022-09-01T05:31:30Z
NONE
null
null
null
## Describe the bug Unable to load local tsv files through load_dataset method. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug data_files = { 'train': 'train.tsv', 'test': 'test.tsv' } raw_datasets = load_dataset('tsv', data_files=data_files) ## Expected results I am pretty sure the data files exist in the current directory. The above code should load them as Datasets, but threw exceptions. ## Actual results --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) [<ipython-input-9-24207899c1af>](https://localhost:8080/#) in <module> ----> 1 raw_datasets = load_dataset('tsv', data_files='train.tsv') 2 frames [/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1244 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. " 1245 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}" -> 1246 ) from None 1247 raise e1 from None 1248 else: FileNotFoundError: Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory. Couldn't find 'tsv' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/main/datasets/tsv/tsv.py ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4920/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4920/timeline
null
completed
false
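For the record above (issue 4920, loading local TSV files), the maintainer comment suggests using the `csv` builder with a tab separator rather than a non-existent `tsv` builder. A minimal sketch of that suggestion, assuming `train.tsv` and `test.tsv` exist in the working directory as in the issue:

```python
# Sketch of the fix quoted in the comment above: TSV files go through the "csv"
# builder with sep="\t" (the separator is forwarded to pandas when reading the files).
from datasets import load_dataset

data_files = {"train": "train.tsv", "test": "test.tsv"}
raw_datasets = load_dataset("csv", data_files=data_files, sep="\t")
print(raw_datasets)
```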
https://api.github.com/repos/huggingface/datasets/issues/4919
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4919/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4919/comments
https://api.github.com/repos/huggingface/datasets/issues/4919/events
https://github.com/huggingface/datasets/pull/4919
1,357,441,599
PR_kwDODunzps4-IxDZ
4,919
feat: improve error message on Keys mismatch. closes #4917
{ "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "events_url": "https://api.github.com/users/PaulLerner/events{/privacy}", "followers_url": "https://api.github.com/users/PaulLerner/followers", "following_url": "https://api.github.com/users/PaulLerner/following{/other_user}", "gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PaulLerner", "id": 25532159, "login": "PaulLerner", "node_id": "MDQ6VXNlcjI1NTMyMTU5", "organizations_url": "https://api.github.com/users/PaulLerner/orgs", "received_events_url": "https://api.github.com/users/PaulLerner/received_events", "repos_url": "https://api.github.com/users/PaulLerner/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions", "type": "User", "url": "https://api.github.com/users/PaulLerner" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "We are having an unrelated issue that makes several tests fail. We are working on that. Once fixed, you will be able to merge the main branch into this, so that you get the fix and the tests pass..." ]
2022-08-31T14:41:36Z
2022-09-05T08:46:01Z
2022-09-05T08:43:33Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4919.diff", "html_url": "https://github.com/huggingface/datasets/pull/4919", "merged_at": "2022-09-05T08:43:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/4919.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4919" }
Hi @lhoestq what do you think? Let me give you a code sample: ```py >>> import datasets >>> foo = datasets.Dataset.from_dict({'foo':[0,1], 'bar':[2,3]}) >>> foo.save_to_disk('foo') # edit foo/dataset_info.json e.g. rename the 'foo' feature to 'baz' >>> datasets.load_from_disk('foo') --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-2-4863e606b330> in <module> ----> 1 datasets.load_from_disk('foo') ~/code/datasets/src/datasets/load.py in load_from_disk(dataset_path, fs, keep_in_memory) 1851 raise FileNotFoundError(f"Directory {dataset_path} not found") 1852 if fs.isfile(Path(dest_dataset_path, config.DATASET_INFO_FILENAME).as_posix()): -> 1853 return Dataset.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory) 1854 elif fs.isfile(Path(dest_dataset_path, config.DATASETDICT_JSON_FILENAME).as_posix()): 1855 return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory) ~/code/datasets/src/datasets/arrow_dataset.py in load_from_disk(dataset_path, fs, keep_in_memory) 1230 info=dataset_info, 1231 split=split, -> 1232 fingerprint=state["_fingerprint"], 1233 ) 1234 ~/code/datasets/src/datasets/arrow_dataset.py in __init__(self, arrow_table, info, split, indices_table, fingerprint) 687 self.info.features = inferred_features 688 else: # make sure the nested columns are in the right order --> 689 self.info.features = self.info.features.reorder_fields_as(inferred_features) 690 691 # Infer fingerprint if None ~/code/datasets/src/datasets/features/features.py in reorder_fields_as(self, other) 1771 return source 1772 -> 1773 return Features(recursive_reorder(self, other)) 1774 1775 def flatten(self, max_depth=16) -> "Features": ~/code/datasets/src/datasets/features/features.py in recursive_reorder(source, target, stack) 1760 f"{source.keys()-target.keys()} are missing from dataset.arrow " 1761 f"and {target.keys()-source.keys()} are missing from dataset_info.json"+stack_position) -> 1762 raise ValueError(message) 1763 return {key: recursive_reorder(source[key], target[key], stack + f".{key}") for key in target} 1764 elif isinstance(source, list): ValueError: Keys mismatch: between {'baz': Value(dtype='int64', id=None), 'bar': Value(dtype='int64', id=None)} (dataset_info.json) and {'foo': Value(dtype='int64', id=None), 'bar': Value(dtype='int64', id=None)} (inferred from dataset.arrow). {'baz'} are missing from dataset.arrow and {'foo'} are missing from dataset_info.json ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4919/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4919/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4918
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4918/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4918/comments
https://api.github.com/repos/huggingface/datasets/issues/4918/events
https://github.com/huggingface/datasets/issues/4918
1,357,242,757
I_kwDODunzps5Q5eGF
4,918
Dataset Viewer issue for pysentimiento/spanish-targeted-sentiment-headlines
{ "avatar_url": "https://avatars.githubusercontent.com/u/167943?v=4", "events_url": "https://api.github.com/users/finiteautomata/events{/privacy}", "followers_url": "https://api.github.com/users/finiteautomata/followers", "following_url": "https://api.github.com/users/finiteautomata/following{/other_user}", "gists_url": "https://api.github.com/users/finiteautomata/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/finiteautomata", "id": 167943, "login": "finiteautomata", "node_id": "MDQ6VXNlcjE2Nzk0Mw==", "organizations_url": "https://api.github.com/users/finiteautomata/orgs", "received_events_url": "https://api.github.com/users/finiteautomata/received_events", "repos_url": "https://api.github.com/users/finiteautomata/repos", "site_admin": false, "starred_url": "https://api.github.com/users/finiteautomata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/finiteautomata/subscriptions", "type": "User", "url": "https://api.github.com/users/finiteautomata" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" } ]
null
[ "Thanks for reporting, it's fixed now (I refreshed it manually). It's a known issue; we hope it will be fixed permanently in a few days.\r\n\r\n<img width=\"1508\" alt=\"Capture d’écran 2022-09-05 à 18 31 22\" src=\"https://user-images.githubusercontent.com/1676121/188489762-0ed86a7e-dfb3-46e8-a125-43b815a2c6f4.png\">\r\n", "Thanks @severo! " ]
2022-08-31T12:09:07Z
2022-09-05T21:36:34Z
2022-09-05T16:32:44Z
NONE
null
null
null
### Link https://huggingface.co/datasets/pysentimiento/spanish-targeted-sentiment-headlines ### Description After moving the dataset from my user (`finiteautomata`) to the `pysentimiento` organization, the dataset viewer says that it doesn't exist. ### Owner _No response_
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4918/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4918/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4917
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4917/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4917/comments
https://api.github.com/repos/huggingface/datasets/issues/4917/events
https://github.com/huggingface/datasets/issues/4917
1,357,193,841
I_kwDODunzps5Q5SJx
4,917
Keys mismatch: make error message more informative
{ "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "events_url": "https://api.github.com/users/PaulLerner/events{/privacy}", "followers_url": "https://api.github.com/users/PaulLerner/followers", "following_url": "https://api.github.com/users/PaulLerner/following{/other_user}", "gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PaulLerner", "id": 25532159, "login": "PaulLerner", "node_id": "MDQ6VXNlcjI1NTMyMTU5", "organizations_url": "https://api.github.com/users/PaulLerner/orgs", "received_events_url": "https://api.github.com/users/PaulLerner/received_events", "repos_url": "https://api.github.com/users/PaulLerner/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions", "type": "User", "url": "https://api.github.com/users/PaulLerner" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
closed
false
null
[]
null
[ "Good idea ! I think this can be improved in `Features.reorder_fields_as()` indeed at\r\n\r\nhttps://github.com/huggingface/datasets/blob/7feeb5648a63b6135a8259dedc3b1e19185ee4c7/src/datasets/features/features.py#L1739-L1740\r\n\r\nIs it something you would be interested in contributing ?", "Is this open to work on? I'd love to take on this as my first issue.", "Hi @daspartho I’ve opened a PR #4919 \r\nI don’t think there’s much left to do", "ok : )" ]
2022-08-31T11:24:34Z
2022-09-05T08:43:38Z
2022-09-05T08:43:38Z
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** When loading a dataset from disk with a defect in its `dataset_info.json` describing its features (I don’t know when/why/how this happens but it deserves its own issue), you will get an error message like: `ValueError: Keys mismatch: between {'bar': Value(dtype='int64', id=None)} and {'foo': Value(dtype='int64', id=None)}` Which is fine when you have only a few features like in the example but it gets very hard to read when you have a lot of features in your dataset. **Describe the solution you'd like** The error message should give the difference between the features (what keys are in A but missing in B and vice-versa). It should also tell which keys are inferred from `dataset.arrow` and which come from `dataset_info.json`. Willing to help :)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4917/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4917/timeline
null
completed
false
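Issue 4917 above asks that the keys-mismatch error spell out which feature names are present in `dataset_info.json` but missing from `dataset.arrow`, and vice versa. The sketch below is illustrative only (it is not the change merged in #4919) and reproduces the kind of two-way diff message shown in that PR's example:

```python
# Illustrative sketch: compute the two-way key difference the issue asks for.
# `from_info` and `inferred` stand in for the feature mappings read from
# dataset_info.json and inferred from dataset.arrow, respectively.
from_info = {"baz": "int64", "bar": "int64"}   # as recorded in dataset_info.json
inferred = {"foo": "int64", "bar": "int64"}    # as inferred from dataset.arrow

if from_info.keys() != inferred.keys():
    missing_from_arrow = from_info.keys() - inferred.keys()   # {'baz'}
    missing_from_info = inferred.keys() - from_info.keys()    # {'foo'}
    raise ValueError(
        f"Keys mismatch: between {from_info} (dataset_info.json) and "
        f"{inferred} (inferred from dataset.arrow).\n"
        f"{missing_from_arrow} are missing from dataset.arrow "
        f"and {missing_from_info} are missing from dataset_info.json"
    )
```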
https://api.github.com/repos/huggingface/datasets/issues/4916
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4916/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4916/comments
https://api.github.com/repos/huggingface/datasets/issues/4916/events
https://github.com/huggingface/datasets/issues/4916
1,357,076,940
I_kwDODunzps5Q41nM
4,916
Apache Beam unable to write the downloaded wikipedia dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/71849081?v=4", "events_url": "https://api.github.com/users/Shilpac20/events{/privacy}", "followers_url": "https://api.github.com/users/Shilpac20/followers", "following_url": "https://api.github.com/users/Shilpac20/following{/other_user}", "gists_url": "https://api.github.com/users/Shilpac20/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Shilpac20", "id": 71849081, "login": "Shilpac20", "node_id": "MDQ6VXNlcjcxODQ5MDgx", "organizations_url": "https://api.github.com/users/Shilpac20/orgs", "received_events_url": "https://api.github.com/users/Shilpac20/received_events", "repos_url": "https://api.github.com/users/Shilpac20/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Shilpac20/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Shilpac20/subscriptions", "type": "User", "url": "https://api.github.com/users/Shilpac20" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "See:\r\n- #4915" ]
2022-08-31T09:39:25Z
2022-08-31T10:53:19Z
2022-08-31T10:53:19Z
NONE
null
null
null
## Describe the bug Hi, I am currently trying to download wikipedia dataset using load_dataset("wikipedia", language="aa", date="20220401", split="train",beam_runner='DirectRunner'). However, I end up in getting filenotfound error. I get this error for any language I try to download. It downloads the file but while saving it in hugging face cache it fails to write. This happens for any available date of any language in wikipedia dump. I had raised another issue earlier #4915 but probably was not that clear and the solution provider misunderstood my problem. Hence raising one more issue. Any help is appreciated. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("wikipedia", language="aa", date="20220401", split="train",beam_runner='DirectRunner') ``` ## Expected results to load the dataset ## Actual results I am pasting the error trace here: Downloading builder script: 35.9kB [00:00, ?B/s] Downloading metadata: 30.4kB [00:00, 1.94MB/s] Using custom data configuration 20220401.aa-date=20220401,language=aa Downloading and preparing dataset wikipedia/20220401.aa to C:\Users\Shilpa.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559... Downloading data: 100%|████████████████████████████████████████████████████████████| 11.1k/11.1k [00:00<00:00, 712kB/s] Downloading data files: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.82s/it] Extracting data files: 100%|█████████████████████████████████████████████████████████████████████| 1/1 [00:00<?, ?it/s] Downloading data: 100%|███████████████████████████████████████████████████████████| 35.6k/35.6k [00:00<00:00, 84.3kB/s] Downloading data files: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.93s/it] Traceback (most recent call last): File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "G:\Python3.7\lib\site-packages\apache_beam\io\iobase.py", line 1193, in process self.writer = self.sink.open_writer(init_result, str(uuid.uuid4())) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 202, in open_writer return FileBasedSinkWriter(self, writer_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 419, in init self.temp_handle = self.sink.open(temp_shard_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\parquetio.py", line 553, in open self._file_handle = super().open(temp_path) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 139, in open temp_path, self.mime_type, self.compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\filesystems.py", line 224, in create return filesystem.create(path, mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 163, in 
create return self._path_open(path, 'wb', mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 140, in _path_open raw_file = io.open(path, mode) FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\Shilpa\.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "G:/abc/temp.py", line 32, in beam_runner='DirectRunner') File "G:\Python3.7\lib\site-packages\datasets\load.py", line 1751, in load_dataset use_auth_token=use_auth_token, File "G:\Python3.7\lib\site-packages\datasets\builder.py", line 705, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "G:\Python3.7\lib\site-packages\datasets\builder.py", line 1394, in _download_and_prepare pipeline_results = pipeline.run() File "G:\Python3.7\lib\site-packages\apache_beam\pipeline.py", line 574, in run return self.runner.run_pipeline(self, self._options) File "G:\Python3.7\lib\site-packages\apache_beam\runners\direct\direct_runner.py", line 131, in run_pipeline return runner.run_pipeline(pipeline, options) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 201, in run_pipeline options) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 212, in run_via_runner_api return self.run_stages(stage_context, stages) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 443, in run_stages runner_execution_context, bundle_context_manager, bundle_input) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 776, in _execute_bundle bundle_manager)) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 1000, in _run_bundle data_input, data_output, input_timers, expected_timer_output) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 1309, in process_bundle result_future = self._worker_handler.control_conn.push(process_bundle_req) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\worker_handlers.py", line 380, in push response = self.worker.do_instruction(request) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\sdk_worker.py", line 598, in do_instruction getattr(request, request_type), request.instruction_id) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\sdk_worker.py", line 635, in process_bundle bundle_processor.process_bundle(instruction_id)) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 1004, in process_bundle element.data) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 227, in process_encoded self.output(decoded_value) File "apache_beam\runners\worker\operations.py", line 526, in apache_beam.runners.worker.operations.Operation.output File "apache_beam\runners\worker\operations.py", line 528, in apache_beam.runners.worker.operations.Operation.output File "apache_beam\runners\worker\operations.py", line 237, in 
apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File 
"apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 324, in apache_beam.runners.worker.operations.GeneralPurposeConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 905, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", 
line 1507, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "G:\Python3.7\lib\site-packages\apache_beam\io\iobase.py", line 1193, in process self.writer = self.sink.open_writer(init_result, str(uuid.uuid4())) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 202, in open_writer return FileBasedSinkWriter(self, writer_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 419, in init self.temp_handle = self.sink.open(temp_shard_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\parquetio.py", line 553, in open self._file_handle = super().open(temp_path) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 139, in open temp_path, self.mime_type, self.compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\filesystems.py", line 224, in create return filesystem.create(path, mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 163, in create return self._path_open(path, 'wb', mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 140, in _path_open raw_file = io.open(path, mode) RuntimeError: FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\Shilpa\.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles'] ## Environment info Python: 3.7.6 Windows 10 Pro datasets :2.4.0 apache_beam: 2.41.0 mwparserfromhell: 0.6.4
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4916/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4916/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4915
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4915/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4915/comments
https://api.github.com/repos/huggingface/datasets/issues/4915/events
https://github.com/huggingface/datasets/issues/4915
1,356,009,042
I_kwDODunzps5Q0w5S
4,915
FileNotFoundError while downloading wikipedia dataset for any language
{ "avatar_url": "https://avatars.githubusercontent.com/u/71849081?v=4", "events_url": "https://api.github.com/users/Shilpac20/events{/privacy}", "followers_url": "https://api.github.com/users/Shilpac20/followers", "following_url": "https://api.github.com/users/Shilpac20/following{/other_user}", "gists_url": "https://api.github.com/users/Shilpac20/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Shilpac20", "id": 71849081, "login": "Shilpac20", "node_id": "MDQ6VXNlcjcxODQ5MDgx", "organizations_url": "https://api.github.com/users/Shilpac20/orgs", "received_events_url": "https://api.github.com/users/Shilpac20/received_events", "repos_url": "https://api.github.com/users/Shilpac20/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Shilpac20/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Shilpac20/subscriptions", "type": "User", "url": "https://api.github.com/users/Shilpac20" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Hi @Shilpac20,\r\n\r\nAs explained in the Wikipedia dataset card: https://huggingface.co/datasets/wikipedia\r\n> You can find the full list of languages and dates [here](https://dumps.wikimedia.org/backup-index.html).\r\n\r\nThis means that, before passing a specific date, you should first make sure it is available online, as Wikimedia only keeps last X months (depending on the size of the corresponding language dump)): e.g. to see which dates \"aa\" Wikipedia is available online, see https://dumps.wikimedia.org/aawiki/ (as of today 2022-08-31, the available dates are from [20220401](https://dumps.wikimedia.org/aawiki/20220401/) to [20220820](https://dumps.wikimedia.org/aawiki/20220820/)).", "Hi, the date that I have specified \"20220401\" is available for the language \"aa\". The error persists for any other available dates as present in https://dumps.wikimedia.org/aawiki/. The error is mainly due to apache beam not able to write the downloaded files. Any help on this?", "I see, sorry, I misread your issue.\r\n\r\nWe are investigating this.", "I am struggling with basically the same issue. I am trying to download the German Wikipedia dump.\r\n\r\nAs per the [documentation](https://huggingface.co/datasets/wikipedia), `\"20220301.de\"` should be available as a pre-processed dataset.\r\n\r\nIssuing the command mentioned in the documentation cited above\r\n\r\n from datasets import load_dataset\r\n load_dataset(\"wikipedia\", \"20220301.de\")\r\n\r\nraises the following `FileNotFound` error\r\n\r\n FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/dewiki/20220301/dumpstatus.json\r\n\r\nUsing the ([undocumented](https://huggingface.co/docs/datasets/v1.2.1/package_reference/loading_methods.html#datasets.load_dataset)?) call to `load_dataset()` with `language` and `date` parameters\r\n\r\n load_dataset(\"wikipedia\", language=\"de\", date=\"20220301\", beam_runner=\"DirectRunner\")\r\n\r\nproduces the same error.\r\n\r\nEDIT: as I am using `datasets` v2.7.1, I should be looking at [that version's documentation](https://huggingface.co/docs/datasets/v2.7.1/en/package_reference/loading_methods#datasets.load_dataset)! It is mentioned there, that additional `kwargs` are \"passed to the [BuilderConfig](https://huggingface.co/docs/datasets/v2.7.1/en/package_reference/builder_classes#datasets.BuilderConfig) and used in the [DatasetBuilder](https://huggingface.co/docs/datasets/v2.7.1/en/package_reference/builder_classes#datasets.DatasetBuilder)\". So I guess that is how `language` and `date` are used.\r\n\r\nAs I can see a folder `20221130` on `https://dumps.wikimedia.org/dewiki/`, I also tried\r\n\r\n from datasets import load_dataset\r\n load_dataset(\"wikipedia\", \"20221130.de\")\r\n\r\nwhich throws another error:\r\n\r\n ValueError: BuilderConfig 20221120.de not found. Available: ['20220301.aa', ... '20220301.de', ...\r\n\r\nbasically telling me that the dataset I originally requested (`'20220301.de'`) is available...\r\n\r\nIt seems that `load_dataset` is not handling the vanishing older dumps for Wikipedia correctly?", "I am able to start downloading the dataset when trying anything with the recent dumps for 20221201. But obviously, those are the big wiki dumps and I need the smaller preloaded version.\r\n\r\nI am now getting some error when the files show up in my cache but it will say FileNotFoundError at the end of the download for some reason. The cache directory to the datasets\\wikipedia\\date.bn\\ had something in it, then when the error came up it disappeared. 
\r\n\r\nIt is easy to test with the langauge \"bn\" because the amount of files is low.\r\n\r\ndataset = load_dataset('wikipedia', date=\"20221201\", language=\"bn\", split='train', beam_runner='DirectRunner')" ]
2022-08-30T16:15:46Z
2022-12-04T22:20:33Z
null
NONE
null
null
null
## Describe the bug Hi, I am currently trying to download wikipedia dataset using load_dataset("wikipedia", language="aa", date="20220401", split="train",beam_runner='DirectRunner'). However, I end up in getting filenotfound error. I get this error for any language I try to download. Environment: ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("wikipedia", language="aa", date="20220401", split="train",beam_runner='DirectRunner') ``` ## Expected results to load the dataset ## Actual results I am pasting the error trace here: Downloading builder script: 35.9kB [00:00, ?B/s] Downloading metadata: 30.4kB [00:00, 1.94MB/s] Using custom data configuration 20220401.aa-date=20220401,language=aa Downloading and preparing dataset wikipedia/20220401.aa to C:\Users\Shilpa\.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559... Downloading data: 100%|████████████████████████████████████████████████████████████| 11.1k/11.1k [00:00<00:00, 712kB/s] Downloading data files: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.82s/it] Extracting data files: 100%|█████████████████████████████████████████████████████████████████████| 1/1 [00:00<?, ?it/s] Downloading data: 100%|███████████████████████████████████████████████████████████| 35.6k/35.6k [00:00<00:00, 84.3kB/s] Downloading data files: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.93s/it] Traceback (most recent call last): File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "G:\Python3.7\lib\site-packages\apache_beam\io\iobase.py", line 1193, in process self.writer = self.sink.open_writer(init_result, str(uuid.uuid4())) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 202, in open_writer return FileBasedSinkWriter(self, writer_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 419, in __init__ self.temp_handle = self.sink.open(temp_shard_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\parquetio.py", line 553, in open self._file_handle = super().open(temp_path) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 139, in open temp_path, self.mime_type, self.compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\filesystems.py", line 224, in create return filesystem.create(path, mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 163, in create return self._path_open(path, 'wb', mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 140, in _path_open raw_file = io.open(path, mode) FileNotFoundError: [Errno 2] No such file or directory: 
'C:\\Users\\Shilpa\\.cache\\huggingface\\datasets\\wikipedia\\20220401.aa-date=20220401,language=aa\\2.0.0\\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "G:/abc/temp.py", line 32, in <module> beam_runner='DirectRunner') File "G:\Python3.7\lib\site-packages\datasets\load.py", line 1751, in load_dataset use_auth_token=use_auth_token, File "G:\Python3.7\lib\site-packages\datasets\builder.py", line 705, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "G:\Python3.7\lib\site-packages\datasets\builder.py", line 1394, in _download_and_prepare pipeline_results = pipeline.run() File "G:\Python3.7\lib\site-packages\apache_beam\pipeline.py", line 574, in run return self.runner.run_pipeline(self, self._options) File "G:\Python3.7\lib\site-packages\apache_beam\runners\direct\direct_runner.py", line 131, in run_pipeline return runner.run_pipeline(pipeline, options) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 201, in run_pipeline options) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 212, in run_via_runner_api return self.run_stages(stage_context, stages) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 443, in run_stages runner_execution_context, bundle_context_manager, bundle_input) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 776, in _execute_bundle bundle_manager)) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 1000, in _run_bundle data_input, data_output, input_timers, expected_timer_output) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 1309, in process_bundle result_future = self._worker_handler.control_conn.push(process_bundle_req) File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\worker_handlers.py", line 380, in push response = self.worker.do_instruction(request) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\sdk_worker.py", line 598, in do_instruction getattr(request, request_type), request.instruction_id) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\sdk_worker.py", line 635, in process_bundle bundle_processor.process_bundle(instruction_id)) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 1004, in process_bundle element.data) File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 227, in process_encoded self.output(decoded_value) File "apache_beam\runners\worker\operations.py", line 526, in apache_beam.runners.worker.operations.Operation.output File "apache_beam\runners\worker\operations.py", line 528, in apache_beam.runners.worker.operations.Operation.output File "apache_beam\runners\worker\operations.py", line 237, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in 
apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in 
apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 324, in apache_beam.runners.worker.operations.GeneralPurposeConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 905, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 1507, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process File "apache_beam\runners\common.py", line 837, in 
apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "apache_beam\runners\common.py", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs File "G:\Python3.7\lib\site-packages\apache_beam\io\iobase.py", line 1193, in process self.writer = self.sink.open_writer(init_result, str(uuid.uuid4())) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 202, in open_writer return FileBasedSinkWriter(self, writer_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 419, in __init__ self.temp_handle = self.sink.open(temp_shard_path) File "G:\Python3.7\lib\site-packages\apache_beam\io\parquetio.py", line 553, in open self._file_handle = super().open(temp_path) File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f return fnc(self, *args, **kwargs) File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 139, in open temp_path, self.mime_type, self.compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\filesystems.py", line 224, in create return filesystem.create(path, mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 163, in create return self._path_open(path, 'wb', mime_type, compression_type) File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 140, in _path_open raw_file = io.open(path, mode) RuntimeError: FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\Shilpa\\.cache\\huggingface\\datasets\\wikipedia\\20220401.aa-date=20220401,language=aa\\2.0.0\\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles'] ## Environment info Python: 3.7.6 Windows 10 Pro datasets :2.4.0 apache_beam: 2.41.0 mwparserfromhell: 0.6.4
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4915/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4915/timeline
null
reopened
false
https://api.github.com/repos/huggingface/datasets/issues/4914
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4914/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4914/comments
https://api.github.com/repos/huggingface/datasets/issues/4914/events
https://github.com/huggingface/datasets/pull/4914
1,355,482,624
PR_kwDODunzps4-CFyN
4,914
Support streaming swda dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-30T09:46:28Z
2022-08-30T11:16:33Z
2022-08-30T11:14:16Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4914.diff", "html_url": "https://github.com/huggingface/datasets/pull/4914", "merged_at": "2022-08-30T11:14:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/4914.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4914" }
Support streaming swda dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4914/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4914/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4913
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4913/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4913/comments
https://api.github.com/repos/huggingface/datasets/issues/4913/events
https://github.com/huggingface/datasets/pull/4913
1,355,232,007
PR_kwDODunzps4-BP00
4,913
Add license and citation information to cosmos_qa dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-30T06:23:19Z
2022-08-30T09:49:31Z
2022-08-30T09:47:35Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4913.diff", "html_url": "https://github.com/huggingface/datasets/pull/4913", "merged_at": "2022-08-30T09:47:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/4913.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4913" }
This PR adds the license information to the `cosmos_qa` dataset: as reported via email by Yejin Choi, the dataset is licensed under CC BY 4.0. This PR also updates the citation information.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4913/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4913/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4912
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4912/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4912/comments
https://api.github.com/repos/huggingface/datasets/issues/4912/events
https://github.com/huggingface/datasets/issues/4912
1,355,078,864
I_kwDODunzps5QxNzQ
4,912
datasets map() processes all data at once and takes a long time
{ "avatar_url": "https://avatars.githubusercontent.com/u/40711748?v=4", "events_url": "https://api.github.com/users/BruceStayHungry/events{/privacy}", "followers_url": "https://api.github.com/users/BruceStayHungry/followers", "following_url": "https://api.github.com/users/BruceStayHungry/following{/other_user}", "gists_url": "https://api.github.com/users/BruceStayHungry/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BruceStayHungry", "id": 40711748, "login": "BruceStayHungry", "node_id": "MDQ6VXNlcjQwNzExNzQ4", "organizations_url": "https://api.github.com/users/BruceStayHungry/orgs", "received_events_url": "https://api.github.com/users/BruceStayHungry/received_events", "repos_url": "https://api.github.com/users/BruceStayHungry/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BruceStayHungry/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceStayHungry/subscriptions", "type": "User", "url": "https://api.github.com/users/BruceStayHungry" }
[]
closed
false
null
[]
null
[ "Hi ! Interesting question ;)\r\n\r\n> Which is better? Process in map() or in data-collator\r\n\r\nAs you said, both can be used in practice: map() if you want to preprocess before training, or a data-collator (or the equivalent `dataset.set_transform`) if you want to preprocess on-the-fly during training. Both options are great and really depend on your case.\r\n\r\nTo choose between the two, here are IMO the main caveats of each approach:\r\n- if your preprocessing takes too much CPU for example, using a data-collator may slow down your training and your GPUs may not work at full speed\r\n- on the other hand, map() may take a lot of time and disk space to run if your dataset is too big.\r\n\r\n> Why huggingface advises map() function? There should be some advantages to using map()\r\n\r\nTo get the best throughput when training a model, it is often recommended to preprocess your dataset before training. Note that preprocessing may include other steps before tokenization such as data filtering, cleaning, chunking etc. which are often done before training.", "Thanks for your clear explanation @lhoestq ! \r\n> * if your preprocessing takes too much CPU for example, using a data-collator may slow down your training and your GPUs may not work at full speed\r\n> * on the other hand, map() may take a lot of time and disk space to run if your dataset is too big.\r\n\r\nI really agree with you. There should be some trade-off between processing before and during the train loop.\r\nBesides, I find `map()` function can cache the results once it has been executed. Very useful!", "I'm closing this issue if you don't mind, feel free to reopen if needed ;)" ]
2022-08-30T02:25:56Z
2022-09-06T09:23:35Z
2022-09-06T09:23:35Z
NONE
null
null
null
**1. Background** The Hugging Face datasets package advises using `map()` to process data in batches. In the example code for pretraining a masked language model, `map()` is used to tokenize all the data at once before the training loop. The corresponding code: ``` with accelerator.main_process_first(): tokenized_datasets = raw_datasets.map( tokenize_function, batched=True, num_proc=args.preprocessing_num_workers, remove_columns=column_names, load_from_cache_file=not args.overwrite_cache, desc="Running tokenizer on every text in dataset" ) ``` **2. The problem** When I try the same pretraining code with a much larger corpus, tokenization takes quite a long time. Alternatively, we can tokenize the data in the `data-collator`. That way, the program only tokenizes one batch per training step and avoids getting stuck in an up-front tokenization pass. **3. My question** As described above, my questions are: * **Which is better? Processing in `map()` or in the `data-collator`?** * **Why does Hugging Face advise using the `map()` function?** There should be some advantages to using `map()`. Thanks for your answers!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4912/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4912/timeline
null
completed
false
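As a practical illustration of the trade-off discussed in the issue above (preprocess once with `map()` versus tokenize on the fly), here is a minimal, hedged sketch. The toy dataset, the `bert-base-uncased` checkpoint and the lambda tokenize functions are assumptions made for illustration, not the code from the pretraining example.

```python
# Minimal sketch contrasting the two preprocessing strategies discussed above.
from datasets import Dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed checkpoint
raw = Dataset.from_dict({"text": ["hello world", "datasets map example"]})  # toy corpus

# Option 1: preprocess once, up front. Results are written to disk and cached,
# so reruns are fast, but the first pass can take long on a large corpus.
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True),
    batched=True,
    remove_columns=["text"],
)
print(tokenized[0])

# Option 2: tokenize on the fly each time a batch is accessed (no cache, no extra disk),
# at the cost of extra CPU work during the training loop.
raw.set_transform(lambda batch: tokenizer(batch["text"], truncation=True))
print(raw[0])
```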
https://api.github.com/repos/huggingface/datasets/issues/4911
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4911/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4911/comments
https://api.github.com/repos/huggingface/datasets/issues/4911/events
https://github.com/huggingface/datasets/issues/4911
1,354,426,978
I_kwDODunzps5Quupi
4,911
[Tests] Ensure `datasets` supports renamed repositories
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "BDE59C", "default": false, "description": "Issues a bit more difficult than \"Good First\" issues", "id": 3761482852, "name": "good second issue", "node_id": "LA_kwDODunzps7gM6xk", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue" } ]
open
false
null
[]
null
[ "You could also switch to using `huggingface_hub` more directly, where such a guarantee is already tested =)\r\n\r\ncc @Wauplin " ]
2022-08-29T14:46:14Z
2022-08-29T15:31:03Z
null
MEMBER
null
null
null
On https://hf.co/datasets you can rename a dataset (or sometimes move it to another user/org). The website handles redirections correctly and AFAIK `datasets` does as well. However, it would be nice to have an integration test to make sure we don't break support for renamed datasets. To implement this, we can use the /api/repos/move endpoint on hub-ci to rename/move a repo (it is documented at https://huggingface.co/docs/hub/api)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4911/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4911/timeline
null
null
false
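To make the idea in the issue above concrete, here is a hedged sketch of what such an integration test could look like using `huggingface_hub` against a CI Hub endpoint. The endpoint URL, token and repo ids are placeholders, and the exact `move_repo` signature may vary across `huggingface_hub` versions, so treat this as a sketch rather than the final test.

```python
# Hedged sketch of an integration test for renamed dataset repositories.
# Placeholders: the endpoint, token and repo ids below are assumptions, not real values.
import os

CI_ENDPOINT = "https://hub-ci.huggingface.co"  # assumed staging Hub endpoint
CI_TOKEN = "hf_dummy_ci_token"                 # placeholder token
os.environ["HF_ENDPOINT"] = CI_ENDPOINT        # point `datasets` at the same endpoint

from huggingface_hub import HfApi
from datasets import load_dataset

api = HfApi(endpoint=CI_ENDPOINT, token=CI_TOKEN)

# Rename/move the repo via the /api/repos/move endpoint wrapped by huggingface_hub.
api.move_repo(from_id="__DUMMY_USER__/old_name", to_id="__DUMMY_USER__/new_name", repo_type="dataset")

# Loading through the old repo id should still work thanks to the Hub's redirection.
ds = load_dataset("__DUMMY_USER__/old_name", split="train", use_auth_token=CI_TOKEN)
assert len(ds) > 0
```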
https://api.github.com/repos/huggingface/datasets/issues/4910
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4910/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4910/comments
https://api.github.com/repos/huggingface/datasets/issues/4910/events
https://github.com/huggingface/datasets/issues/4910
1,354,374,328
I_kwDODunzps5Quhy4
4,910
Identical keywords in build_kwargs and config_kwargs lead to TypeError in load_dataset_builder()
{ "avatar_url": "https://avatars.githubusercontent.com/u/57184353?v=4", "events_url": "https://api.github.com/users/bablf/events{/privacy}", "followers_url": "https://api.github.com/users/bablf/followers", "following_url": "https://api.github.com/users/bablf/following{/other_user}", "gists_url": "https://api.github.com/users/bablf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bablf", "id": 57184353, "login": "bablf", "node_id": "MDQ6VXNlcjU3MTg0MzUz", "organizations_url": "https://api.github.com/users/bablf/orgs", "received_events_url": "https://api.github.com/users/bablf/received_events", "repos_url": "https://api.github.com/users/bablf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bablf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bablf/subscriptions", "type": "User", "url": "https://api.github.com/users/bablf" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/21123710?v=4", "events_url": "https://api.github.com/users/thepurpleowl/events{/privacy}", "followers_url": "https://api.github.com/users/thepurpleowl/followers", "following_url": "https://api.github.com/users/thepurpleowl/following{/other_user}", "gists_url": "https://api.github.com/users/thepurpleowl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thepurpleowl", "id": 21123710, "login": "thepurpleowl", "node_id": "MDQ6VXNlcjIxMTIzNzEw", "organizations_url": "https://api.github.com/users/thepurpleowl/orgs", "received_events_url": "https://api.github.com/users/thepurpleowl/received_events", "repos_url": "https://api.github.com/users/thepurpleowl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thepurpleowl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thepurpleowl/subscriptions", "type": "User", "url": "https://api.github.com/users/thepurpleowl" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/21123710?v=4", "events_url": "https://api.github.com/users/thepurpleowl/events{/privacy}", "followers_url": "https://api.github.com/users/thepurpleowl/followers", "following_url": "https://api.github.com/users/thepurpleowl/following{/other_user}", "gists_url": "https://api.github.com/users/thepurpleowl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thepurpleowl", "id": 21123710, "login": "thepurpleowl", "node_id": "MDQ6VXNlcjIxMTIzNzEw", "organizations_url": "https://api.github.com/users/thepurpleowl/orgs", "received_events_url": "https://api.github.com/users/thepurpleowl/received_events", "repos_url": "https://api.github.com/users/thepurpleowl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thepurpleowl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thepurpleowl/subscriptions", "type": "User", "url": "https://api.github.com/users/thepurpleowl" } ]
null
[ "I am getting similar error - `TypeError: type object got multiple values for keyword argument 'name'` while following this [tutorial](https://huggingface.co/docs/datasets/dataset_script#create-a-dataset-loading-script). I am getting this error with the `dataset-cli test` command.\r\n\r\n`datasets` version: 2.4.0", "In my case, this was happening because I defined multiple `BuilderConfig` for multiple types, but didn't had all the data files that are requierd by those configs. \r\n\r\nI think this is different than the original issue by @bablf .", "Hi ! I think this can be fixed by letting the config_kwargs take over the builder kwargs here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/7feeb5648a63b6135a8259dedc3b1e19185ee4c7/src/datasets/load.py#L1533-L1534\r\n\r\nmaybe something like this ?\r\n```python\r\n **{**builder_kwargs, **config_kwargs}\r\n```\r\n\r\nLet me know if you'd like to contribute and fix this bug, so I can assign you :)\r\n\r\n> In my case, this was happening because I defined multiple BuilderConfig for multiple types, but didn't had all the data files that are requierd by those configs.\r\n> \r\n> I think this is different than the original issue by @bablf .\r\n\r\nFeel free to to open an new issue, I'd be happy to help\r\n", "@lhoestq Yeah, I want to, please assign.", "Cool thank you ! Let me know if you have questions or if I can help", "@lhoestq On second thoughts, I think this might be expected behavior; although a better error message might help.\r\n\r\nReasoning: Given n configs, if no data file is provided for any config, then it should be an error. Then why it should not be the case if out of n configs, for some data files are provided but not for others. Also, I was using `--all_configs` flag with `dataset-cli test`.", "Ok I see - maybe we should check the values of builder_kwargs raise an error if any key in config_kwargs tries to overwrite it ? The builder kwargs are determined from the builder's type and location (in some cases it forces the base_path, data_files and config name for example)" ]
2022-08-29T14:11:48Z
2022-09-13T11:58:46Z
null
NONE
null
null
null
## Describe the bug In `load_dataset_builder()`, `build_kwargs` and `config_kwargs` can contain the same keywords, leading to a TypeError ("type object got multiple values for keyword argument 'xyz'"). I ran into this problem with the keyword `base_path`. It might happen with other kwargs as well. I think a quick fix would be ```python builder_cls = import_main_class(dataset_module.module_path) builder_kwargs = dataset_module.builder_kwargs data_files = builder_kwargs.pop("data_files", data_files) config_name = builder_kwargs.pop("config_name", name) hash = builder_kwargs.pop("hash") base_path = builder_kwargs.pop("base_path") ``` and then pass `base_path` into `builder_cls`. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("rotten_tomatoes", base_path="./sample_data") ``` ## Expected results The docs state: `**config_kwargs` — Keyword arguments to be passed to the [BuilderConfig](https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/builder_classes#datasets.BuilderConfig) and used in the [DatasetBuilder](https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/builder_classes#datasets.DatasetBuilder). So I would expect to be able to pass `base_path` into `load_dataset()`. ## Actual results TypeError ("type object got multiple values for keyword argument 'base_path'"). ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: macOS-12.5-arm64-arm-64bit - Python version: 3.8.9 - PyArrow version: 9.0.0
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4910/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4910/timeline
null
null
false
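A tiny self-contained sketch of the keyword clash described in the issue above, and of the merge-based fix suggested in its comments (letting `config_kwargs` take precedence over the builder kwargs). The dictionary values and the `fake_builder` function are made up for illustration; only the merging pattern matters.

```python
# Toy reproduction of the keyword clash: both dicts carry "base_path".
builder_kwargs = {"base_path": "/cache/datasets/rotten_tomatoes", "hash": "deadbeef"}  # assumed values
config_kwargs = {"base_path": "./sample_data"}  # what the user passed to load_dataset()

def fake_builder(**kwargs):
    """Stand-in for the DatasetBuilder constructor."""
    return kwargs

# This mirrors the failing call: the same keyword is expanded twice.
try:
    fake_builder(**builder_kwargs, **config_kwargs)
except TypeError as err:
    print(err)  # ... got multiple values for keyword argument 'base_path'

# Suggested fix: merge the dicts so user-provided config_kwargs override builder_kwargs.
print(fake_builder(**{**builder_kwargs, **config_kwargs}))
```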
https://api.github.com/repos/huggingface/datasets/issues/4909
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4909/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4909/comments
https://api.github.com/repos/huggingface/datasets/issues/4909/events
https://github.com/huggingface/datasets/pull/4909
1,353,997,788
PR_kwDODunzps499Fhe
4,909
Update GLUE evaluation metadata
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-29T09:43:44Z
2022-08-29T14:53:29Z
2022-08-29T14:51:18Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4909.diff", "html_url": "https://github.com/huggingface/datasets/pull/4909", "merged_at": "2022-08-29T14:51:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/4909.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4909" }
This PR updates the evaluation metadata for GLUE to: * Include defaults for all configs except `ax` (which only has a `test` split with no known labels) * Fix the default split from `test` to `validation` since `test` splits in GLUE have no labels (they're private) * Fix the `task_id` for some existing defaults cc @sashavor @douwekiela
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4909/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4909/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4908
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4908/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4908/comments
https://api.github.com/repos/huggingface/datasets/issues/4908/events
https://github.com/huggingface/datasets/pull/4908
1,353,995,574
PR_kwDODunzps499FDS
4,908
Fix missing tags in dataset cards
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-29T09:41:53Z
2022-09-22T14:35:56Z
2022-08-29T16:13:07Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4908.diff", "html_url": "https://github.com/huggingface/datasets/pull/4908", "merged_at": "2022-08-29T16:13:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/4908.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4908" }
Fix missing tags in dataset cards: - asnq - clue - common_gen - cosmos_qa - guardian_authorship - hindi_discourse - py_ast - x_stance This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. Related to: - #4833 - #4891 - #4896
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4908/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4908/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4907
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4907/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4907/comments
https://api.github.com/repos/huggingface/datasets/issues/4907/events
https://github.com/huggingface/datasets/issues/4907
1,353,808,348
I_kwDODunzps5QsXnc
4,907
NoneType error for the swda dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8229163?v=4", "events_url": "https://api.github.com/users/hannan72/events{/privacy}", "followers_url": "https://api.github.com/users/hannan72/followers", "following_url": "https://api.github.com/users/hannan72/following{/other_user}", "gists_url": "https://api.github.com/users/hannan72/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hannan72", "id": 8229163, "login": "hannan72", "node_id": "MDQ6VXNlcjgyMjkxNjM=", "organizations_url": "https://api.github.com/users/hannan72/orgs", "received_events_url": "https://api.github.com/users/hannan72/received_events", "repos_url": "https://api.github.com/users/hannan72/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hannan72/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hannan72/subscriptions", "type": "User", "url": "https://api.github.com/users/hannan72" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Thanks for reporting @hannan72 ! I couldn't reproduce the error on my side, can you share the full stack trace please ?", "Thanks a lot for your response @lhoestq \r\nThe problem is solved accidentally today and I don't know exactly why it was happened yesterday.\r\nThe issue can be closed.", "Ok, let us know if you encounter the issue again ;)" ]
2022-08-29T07:05:20Z
2022-08-30T14:43:41Z
2022-08-30T14:43:41Z
NONE
null
null
null
## Describe the bug I got a `'NoneType' object is not callable` error while loading the swda dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("swda") ``` ## Expected results It should run without errors. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Python version: 3.8.10
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4907/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4907/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4906
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4906/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4906/comments
https://api.github.com/repos/huggingface/datasets/issues/4906/events
https://github.com/huggingface/datasets/issues/4906
1,353,223,925
I_kwDODunzps5QqI71
4,906
Can't import datasets AttributeError: partially initialized module 'datasets' has no attribute 'utils' (most likely due to a circular import)
{ "avatar_url": "https://avatars.githubusercontent.com/u/63536981?v=4", "events_url": "https://api.github.com/users/OPterminator/events{/privacy}", "followers_url": "https://api.github.com/users/OPterminator/followers", "following_url": "https://api.github.com/users/OPterminator/following{/other_user}", "gists_url": "https://api.github.com/users/OPterminator/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/OPterminator", "id": 63536981, "login": "OPterminator", "node_id": "MDQ6VXNlcjYzNTM2OTgx", "organizations_url": "https://api.github.com/users/OPterminator/orgs", "received_events_url": "https://api.github.com/users/OPterminator/received_events", "repos_url": "https://api.github.com/users/OPterminator/repos", "site_admin": false, "starred_url": "https://api.github.com/users/OPterminator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/OPterminator/subscriptions", "type": "User", "url": "https://api.github.com/users/OPterminator" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Thanks for reporting, @OPterminator.\r\n\r\nHowever, we are not able to reproduce this issue.\r\n\r\nThere might be 2 reasons why you get this exception:\r\n- Either the name of your local Python file: if it is called `datasets.py` this could generate a circular import when trying to import the Hugging Face `datasets` library.\r\n - You could try to rename it and run it again.\r\n- Another cause could be the simultaneous use of the packages `nlp` and `datasets`. Please note that we renamed the Hugging Face `nlp` library to `datasets` more than 2 years ago: they are 2 versions of the same library.\r\n - Please try to update your script and use only `datasets` (`nlp` name is no longer in use and is out of date)." ]
2022-08-28T02:23:24Z
2022-10-03T12:22:50Z
2022-10-03T12:22:50Z
NONE
null
null
null
## Describe the bug A clear and concise description of what the bug is. Not able to import datasets ## Steps to reproduce the bug ```python # Sample code to reproduce the bug import os os.environ["WANDB_API_KEY"] = "0" ## to silence warning import numpy as np import random import sklearn import matplotlib.pyplot as plt import pandas as pd import sys import tensorflow as tf import plotly.express as px import transformers import tokenizers import nlp as nlp import utils import datasets ``` ## Expected results A clear and concise description of the expected results. import should work normal ## Actual results Specify the actual results or traceback. --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-21-b3b5b0b62103> in <module> 13 import nlp as nlp 14 import utils ---> 15 import datasets ~\anaconda3\lib\site-packages\datasets\__init__.py in <module> 44 from .fingerprint import disable_caching, enable_caching, is_caching_enabled, set_caching_enabled 45 from .info import DatasetInfo, MetricInfo ---> 46 from .inspect import ( 47 get_dataset_config_info, 48 get_dataset_config_names, ~\anaconda3\lib\site-packages\datasets\inspect.py in <module> 28 from .download.streaming_download_manager import StreamingDownloadManager 29 from .info import DatasetInfo ---> 30 from .load import dataset_module_factory, import_main_class, load_dataset_builder, metric_module_factory 31 from .utils.file_utils import relative_to_absolute_path 32 from .utils.logging import get_logger ~\anaconda3\lib\site-packages\datasets\load.py in <module> 53 from .iterable_dataset import IterableDataset 54 from .metric import Metric ---> 55 from .packaged_modules import ( 56 _EXTENSION_TO_MODULE, 57 _MODULE_SUPPORTS_METADATA, ~\anaconda3\lib\site-packages\datasets\packaged_modules\__init__.py in <module> 4 from typing import List 5 ----> 6 from .csv import csv 7 from .imagefolder import imagefolder 8 from .json import json ~\anaconda3\lib\site-packages\datasets\packaged_modules\csv\csv.py in <module> 13 14 ---> 15 logger = datasets.utils.logging.get_logger(__name__) 16 17 _PANDAS_READ_CSV_NO_DEFAULT_PARAMETERS = ["names", "prefix"] AttributeError: partially initialized module 'datasets' has no attribute 'utils' (most likely due to a circular import) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> Copy-and-paste the text below in your GitHub issue. - `datasets` version: 2.4.0 - Platform: Windows-10-10.0.22000-SP0 - Python version: 3.8.8 - PyArrow version: 9.0.0 - Pandas version: 1.2.4
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4906/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4906/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4904
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4904/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4904/comments
https://api.github.com/repos/huggingface/datasets/issues/4904/events
https://github.com/huggingface/datasets/pull/4904
1,353,002,837
PR_kwDODunzps4959Ad
4,904
[LibriSpeech] Fix dev split local_extracted_archive for 'all' config
{ "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sanchit-gandhi", "id": 93869735, "login": "sanchit-gandhi", "node_id": "U_kgDOBZhWpw", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "type": "User", "url": "https://api.github.com/users/sanchit-gandhi" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "This PR fixes a bug introduced in:\r\n- #4184" ]
2022-08-27T10:04:57Z
2022-08-30T10:06:21Z
2022-08-30T10:03:25Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4904.diff", "html_url": "https://github.com/huggingface/datasets/pull/4904", "merged_at": "2022-08-30T10:03:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/4904.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4904" }
We define the keys for the `_DL_URLS` of the dev split as `dev.clean` and `dev.other`: https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L60-L61 These keys get forwarded to the `dl_manager` and thus the `local_extracted_archive`. However, when calling `SplitGenerator` for the dev sets, we query the `local_extracted_archive` keys `validation.clean` and `validation.other`: https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L212 https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L219 The consequence of this is that the `local_extracted_archive` arg passed to `_generate_examples` is always `None`, as the keys `validation.clean` and `validation.other` do not exist in the `local_extracted_archive`. When defining the `audio_file` in `_generate_examples`, since `local_extracted_archive` is always `None`, we always omit the `local_extracted_archive` path from the `audio_file` path, **even** in non-streaming mode: https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L259-L263 Thus, `audio_file` will only ever be the streaming path (`audio_file`, not `os.path.join(local_extracted_archive, audio_file)`). This PR fixes the `.get()` keys for the `local_extracted_archive` for the dev splits.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4904/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4904/timeline
null
null
true
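A minimal, hedged illustration of the key mismatch the PR above fixes. The paths are invented and the real loading script does much more, but the `.get()` behaviour is the crux of the bug.

```python
# The extracted archives are stored under the _DL_URLS keys "dev.clean"/"dev.other" ...
local_extracted_archive = {
    "dev.clean": "/tmp/extracted/dev-clean",   # assumed extraction paths
    "dev.other": "/tmp/extracted/dev-other",
}

# ... but the dev SplitGenerator looked them up under the split names, so it always got None.
buggy_lookup = local_extracted_archive.get("validation.clean")
fixed_lookup = local_extracted_archive.get("dev.clean")

print(buggy_lookup)  # None -> the local path is silently dropped, streaming-style path used
print(fixed_lookup)  # "/tmp/extracted/dev-clean" -> correct local path in non-streaming mode
```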
https://api.github.com/repos/huggingface/datasets/issues/4903
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4903/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4903/comments
https://api.github.com/repos/huggingface/datasets/issues/4903/events
https://github.com/huggingface/datasets/pull/4903
1,352,539,075
PR_kwDODunzps494aud
4,903
Fix CI reporting
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-26T17:16:30Z
2022-08-26T17:49:33Z
2022-08-26T17:46:59Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4903.diff", "html_url": "https://github.com/huggingface/datasets/pull/4903", "merged_at": "2022-08-26T17:46:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/4903.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4903" }
Fix CI so that it reports the default statuses (failed and error) in addition to the custom ones (xfailed and xpassed) in the test summary. This PR fixes a regression introduced by: - #4845 That PR introduced the reporting of xfailed and xpassed, but wrongly removed the reporting of the default failed and error statuses.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4903/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4903/timeline
null
null
true
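For context on the summary flags involved in the PR above, here is a hedged sketch of how pytest's `-r` option controls which outcomes appear in the short test summary (f = failed, E = error, x = xfailed, X = xpassed). The test path is a placeholder, and the repository's actual CI configuration may differ from this.

```python
# Hedged sketch: invoke pytest so the summary lists failed/error as well as xfailed/xpassed.
import pytest

# "-rfExX" asks for a short summary of (f)ailed, (E)rror, (x)failed and (X)passed outcomes.
exit_code = pytest.main(["-rfExX", "tests/"])  # "tests/" is a placeholder path
print(exit_code)
```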
https://api.github.com/repos/huggingface/datasets/issues/4902
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4902/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4902/comments
https://api.github.com/repos/huggingface/datasets/issues/4902/events
https://github.com/huggingface/datasets/issues/4902
1,352,469,196
I_kwDODunzps5QnQrM
4,902
Name the default config `default`
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
open
false
null
[]
null
[]
2022-08-26T16:16:22Z
2022-08-26T16:16:38Z
null
CONTRIBUTOR
null
null
null
Currently, if a dataset has no configuration, a default configuration is created from the dataset name. For example, for a dataset loaded from a Hub repository, such as https://huggingface.co/datasets/user/dataset (repo id is `user/dataset`), the default configuration will be `user--dataset`. It might be easier to handle if it were set to `default`, or another reserved word.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4902/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4902/timeline
null
null
false
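A toy sketch of the naming scheme discussed in the issue above: today the default configuration name is derived from the repo id, while the proposal is to use a reserved name such as `default`. The derivation below is a simplification for illustration, not the library's exact logic.

```python
# Current behaviour (simplified): the default config name mirrors the repo id.
repo_id = "user/dataset"
current_default = repo_id.replace("/", "--")   # -> "user--dataset"

# Proposal: a reserved, predictable name that does not depend on the repo id.
proposed_default = "default"

print(current_default, proposed_default)
```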
https://api.github.com/repos/huggingface/datasets/issues/4901
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4901/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4901/comments
https://api.github.com/repos/huggingface/datasets/issues/4901/events
https://github.com/huggingface/datasets/pull/4901
1,352,438,915
PR_kwDODunzps494FNX
4,901
Raise ManualDownloadError from get_dataset_config_info
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-26T15:45:56Z
2022-08-30T10:42:21Z
2022-08-30T10:40:04Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4901.diff", "html_url": "https://github.com/huggingface/datasets/pull/4901", "merged_at": "2022-08-30T10:40:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/4901.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4901" }
This PRs raises a specific `ManualDownloadError` when `get_dataset_config_info` is called for a dataset that requires manual download. Related to: - #4898 CC: @severo
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4901/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4901/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4900
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4900/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4900/comments
https://api.github.com/repos/huggingface/datasets/issues/4900/events
https://github.com/huggingface/datasets/issues/4900
1,352,405,855
I_kwDODunzps5QnBNf
4,900
Dataset Viewer issue for asaxena1990/Dummy_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/56627657?v=4", "events_url": "https://api.github.com/users/ankurcl/events{/privacy}", "followers_url": "https://api.github.com/users/ankurcl/followers", "following_url": "https://api.github.com/users/ankurcl/following{/other_user}", "gists_url": "https://api.github.com/users/ankurcl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ankurcl", "id": 56627657, "login": "ankurcl", "node_id": "MDQ6VXNlcjU2NjI3NjU3", "organizations_url": "https://api.github.com/users/ankurcl/orgs", "received_events_url": "https://api.github.com/users/ankurcl/received_events", "repos_url": "https://api.github.com/users/ankurcl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ankurcl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ankurcl/subscriptions", "type": "User", "url": "https://api.github.com/users/ankurcl" }
[]
open
false
null
[]
null
[ "Seems to be linked to the use of the undocumented `_resolve_features` method in the dataset viewer backend:\r\n\r\n```\r\n>>> from datasets import load_dataset\r\n>>> dataset = load_dataset(\"asaxena1990/Dummy_dataset\", name=\"asaxena1990--Dummy_dataset\", split=\"train\", streaming=True)\r\nUsing custom data configuration asaxena1990--Dummy_dataset-4a704ed7e5627563\r\n>>> dataset._resolve_features()\r\nFailed to read file 'https://huggingface.co/datasets/asaxena1990/Dummy_dataset/resolve/06885879a8bdd767d2d27695484fc6c83244617a/dummy_dataset_train.json' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Column() changed from object to array in row 0\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py\", line 109, in _generate_tables\r\n pa_table = paj.read_json(\r\n File \"pyarrow/_json.pyx\", line 246, in pyarrow._json.read_json\r\n File \"pyarrow/error.pxi\", line 143, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: JSON parse error: Column() changed from object to array in row 0\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 1261, in _resolve_features\r\n features = _infer_features_from_batch(self._head())\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 686, in _head\r\n return _examples_to_batch([x for key, x in islice(self._iter(), n)])\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 686, in <listcomp>\r\n return _examples_to_batch([x for key, x in islice(self._iter(), n)])\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 708, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 112, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 651, in wrapper\r\n for key, table in generate_tables_fn(**kwargs):\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py\", line 137, in _generate_tables\r\n f\"This JSON file contain the following fields: {str(list(dataset.keys()))}. \"\r\nAttributeError: 'list' object has no attribute 'keys'\r\n```\r\n\r\nPinging @huggingface/datasets", "Hi ! JSON files containing a list of object are not supported yet, you can use JSON Lines files instead in the meantime\r\n```json\r\n{\"text\": \"can I know this?\", \"intent\": \"Know\", \"type\": \"Test\"}\r\n{\"text\": \"can I know this?\", \"intent\": \"Know\", \"type\": \"Test\"}\r\n...\r\n```" ]
2022-08-26T15:15:44Z
2022-08-26T16:48:11Z
null
NONE
null
null
null
### Link _No response_ ### Description _No response_ ### Owner _No response_
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4900/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4900/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4899
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4899/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4899/comments
https://api.github.com/repos/huggingface/datasets/issues/4899/events
https://github.com/huggingface/datasets/pull/4899
1,352,031,286
PR_kwDODunzps492uTO
4,899
Re-add code and und language tags
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-26T09:48:57Z
2022-08-26T10:27:18Z
2022-08-26T10:24:20Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4899.diff", "html_url": "https://github.com/huggingface/datasets/pull/4899", "merged_at": "2022-08-26T10:24:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/4899.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4899" }
This PR fixes the removal of 2 language tags done by: - #4882 The tags are: - "code": this is not a IANA tag but needed - "und": this is one of the special scoped tags removed by 0d53202b9abce6fd0358cb00d06fcfd904b875af - used in "mc4" and "udhr" datasets
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4899/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4899/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4898
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4898/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4898/comments
https://api.github.com/repos/huggingface/datasets/issues/4898/events
https://github.com/huggingface/datasets/issues/4898
1,351,851,254
I_kwDODunzps5Qk5z2
4,898
Dataset Viewer issue for timit_asr
{ "avatar_url": "https://avatars.githubusercontent.com/u/91126978?v=4", "events_url": "https://api.github.com/users/InayatUllah932/events{/privacy}", "followers_url": "https://api.github.com/users/InayatUllah932/followers", "following_url": "https://api.github.com/users/InayatUllah932/following{/other_user}", "gists_url": "https://api.github.com/users/InayatUllah932/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/InayatUllah932", "id": 91126978, "login": "InayatUllah932", "node_id": "MDQ6VXNlcjkxMTI2OTc4", "organizations_url": "https://api.github.com/users/InayatUllah932/orgs", "received_events_url": "https://api.github.com/users/InayatUllah932/received_events", "repos_url": "https://api.github.com/users/InayatUllah932/repos", "site_admin": false, "starred_url": "https://api.github.com/users/InayatUllah932/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/InayatUllah932/subscriptions", "type": "User", "url": "https://api.github.com/users/InayatUllah932" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Yes, the dataset viewer is based on `datasets`, and the following does not work:\r\n\r\n```\r\n>>> from datasets import get_dataset_split_names\r\n>>> get_dataset_split_names('timit_asr')\r\nDownloading builder script: 7.48kB [00:00, 6.69MB/s]\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 354, in get_dataset_config_info\r\n for split_generator in builder._split_generators(\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/timit_asr/43f9448dd5db58e95ee48a277f466481b151f112ea53e27f8173784da9254fb2/timit_asr.py\", line 117, in _split_generators\r\n data_dir = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))\r\n File \"/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/posixpath.py\", line 231, in expanduser\r\n path = os.fspath(path)\r\nTypeError: expected str, bytes or os.PathLike object, not NoneType\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 404, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 359, in get_dataset_config_info\r\n raise SplitsNotFoundError(\"The split names could not be parsed from the dataset config.\") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```\r\n\r\ncc @huggingface/datasets ", "Due to license restriction, this dataset needs manual downloading of the original data.\r\n\r\nThis information is in the dataset card: https://huggingface.co/datasets/timit_asr\r\n> The dataset needs to be downloaded manually from https://catalog.ldc.upenn.edu/LDC93S1", "Maybe a better error message for datasets that need manual downloading? @severo \r\n\r\nMaybe we can raise a specific excpetion as done from `load_dataset`...", "Yes, ideally something like https://github.com/huggingface/datasets/blob/main/src/datasets/builder.py#L81\r\n", "The preview is now disabled (and a descriptive warning is displayed) for datasets requiring manual download. See:\r\n\r\n![timit_asr-manual-download](https://user-images.githubusercontent.com/8515462/193578572-3d21b790-f848-4257-9e9b-7cab3d76a269.png)\r\n" ]
2022-08-26T07:12:05Z
2022-10-03T12:40:28Z
2022-10-03T12:40:27Z
NONE
null
null
null
### Link _No response_ ### Description _No response_ ### Owner _No response_
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4898/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4898/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4897
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4897/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4897/comments
https://api.github.com/repos/huggingface/datasets/issues/4897/events
https://github.com/huggingface/datasets/issues/4897
1,351,784,727
I_kwDODunzps5QkpkX
4,897
datasets generate large arrow file
{ "avatar_url": "https://avatars.githubusercontent.com/u/18533904?v=4", "events_url": "https://api.github.com/users/osayes/events{/privacy}", "followers_url": "https://api.github.com/users/osayes/followers", "following_url": "https://api.github.com/users/osayes/following{/other_user}", "gists_url": "https://api.github.com/users/osayes/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/osayes", "id": 18533904, "login": "osayes", "node_id": "MDQ6VXNlcjE4NTMzOTA0", "organizations_url": "https://api.github.com/users/osayes/orgs", "received_events_url": "https://api.github.com/users/osayes/received_events", "repos_url": "https://api.github.com/users/osayes/repos", "site_admin": false, "starred_url": "https://api.github.com/users/osayes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osayes/subscriptions", "type": "User", "url": "https://api.github.com/users/osayes" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi ! The cache files are the results of all the transforms you applied to the dataset using `map` for example.\r\nDid you run a transform that could potentially blow up the size of the dataset ?", "@lhoestq,\r\nI don't remember, but I can't imagine what kind of transform may generate data that grow over 200 times in size. \r\nI think maybe it doesn' matter, it's just cache after all." ]
2022-08-26T05:51:16Z
2022-09-18T05:07:52Z
2022-09-18T05:07:52Z
NONE
null
null
null
Checking the large file in disk, and found the large cache file in the cifar10 data directory: ![image](https://user-images.githubusercontent.com/18533904/186830449-ba96cdeb-0fe8-4543-994d-2abe7145933f.png) As we know, the size of cifar10 dataset is ~130MB, but the cache file has almost 30GB size, there may be some problems here.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4897/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4897/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4896
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4896/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4896/comments
https://api.github.com/repos/huggingface/datasets/issues/4896/events
https://github.com/huggingface/datasets/pull/4896
1,351,180,409
PR_kwDODunzps49z4fU
4,896
Fix missing tags in dataset cards
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-25T16:41:43Z
2022-09-22T14:37:16Z
2022-08-26T04:41:48Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4896.diff", "html_url": "https://github.com/huggingface/datasets/pull/4896", "merged_at": "2022-08-26T04:41:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/4896.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4896" }
Fix missing tags in dataset cards: - anli - coarse_discourse - commonsense_qa - cos_e - ilist - lc_quad - web_questions - xsum This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. Related to: - #4833 - #4891
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4896/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4896/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4895
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4895/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4895/comments
https://api.github.com/repos/huggingface/datasets/issues/4895/events
https://github.com/huggingface/datasets/issues/4895
1,350,798,527
I_kwDODunzps5Qg4y_
4,895
load_dataset method returns Unknown split "validation" even if this dir exists
{ "avatar_url": "https://avatars.githubusercontent.com/u/13418507?v=4", "events_url": "https://api.github.com/users/SamSamhuns/events{/privacy}", "followers_url": "https://api.github.com/users/SamSamhuns/followers", "following_url": "https://api.github.com/users/SamSamhuns/following{/other_user}", "gists_url": "https://api.github.com/users/SamSamhuns/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SamSamhuns", "id": 13418507, "login": "SamSamhuns", "node_id": "MDQ6VXNlcjEzNDE4NTA3", "organizations_url": "https://api.github.com/users/SamSamhuns/orgs", "received_events_url": "https://api.github.com/users/SamSamhuns/received_events", "repos_url": "https://api.github.com/users/SamSamhuns/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SamSamhuns/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SamSamhuns/subscriptions", "type": "User", "url": "https://api.github.com/users/SamSamhuns" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "I don't know the main problem but it looks like, it is ignoring the last directory in your case. So, create a directory called 'zzz' in the same folder as train, validation and test. if it doesn't work, create a directory called \"aaa\". It worked for me.\r\n", "@SamSamhuns could you please try to load it with the current main-branch version of `datasets`? I suppose the problem is that it tries to get splits names from filenames in this case, ignoring directories names, but `val` wasn't in keywords at that time, but it was fixed recently in this PR https://github.com/huggingface/datasets/pull/4844. ", "I have a similar problem.\r\nWhen I try to create `data_infos.json` using `datasets-cli test Peter.py --save_infos --all_configs` I get an error:\r\n`ValueError: Unknown split \"test\". Should be one of ['train'].`\r\n\r\nThe `data_infos.json` is created perfectly fine when I use only one split - `datasets.Split.TRAIN`\r\n\r\n@polinaeterna Could you help here please?\r\n\r\nYou can find the code here: https://huggingface.co/datasets/sberbank-ai/Peter/tree/add_splits (add_splits branch)", "@skalinin It seems the `dataset_infos.json` of your dataset is missing the info on the test split (and `datasets-cli` doesn't ignore the cached infos at the moment, which is a known bug), so your issue is not related to this one. I think you can fix your issue by deleting all the cached `dataset_infos.json` (in the local repo and in `~/.cache/huggingface/modules`) before running the `datasets-cli test` command. Let us know if that doesn't help, and I can try to generate it myself.", "This code indeed behaves as expected on `main`. But suppose the `val_234.png` is renamed to some other value not containing one of [these](https://github.com/huggingface/datasets/blob/38c8c725f3996ff1ff03f6fd461aa6d645321034/src/datasets/data_files.py#L31) keywords, in that case, this issue becomes relevant again because the real cause of it is the order in which we check the predefined split patterns to assign data files to each split - first we assign data files based on filenames, and only if this fails meaning not a single split found (`val` is not recognized here in the older versions of `datasets`, which results in an empty `validation` split), do we assign based on directory names.\r\n\r\n@polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https://github.com/huggingface/datasets/blob/38c8c725f3996ff1ff03f6fd461aa6d645321034/src/datasets/data_files.py#L78-L79) of the patterns if `data_dir` is specified (or if `load_dataset(data_dir)` is called)? ", "> @polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https://github.com/huggingface/datasets/blob/38c8c725f3996ff1ff03f6fd461aa6d645321034/src/datasets/data_files.py#L78-L79) of the patterns if data_dir is specified (or if load_dataset(data_dir) is called)?\r\n\r\nyes that makes sense !", "Looks like the `val/validation` dir name issue is fixed with the current main-branch version of the `datasets` repository. \r\n\r\n> @polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https://github.com/huggingface/datasets/blob/38c8c725f3996ff1ff03f6fd461aa6d645321034/src/datasets/data_files.py#L78-L79) of the patterns if data_dir is specified (or if load_dataset(data_dir) is called)?\r\n\r\nI agree with this as well. I would expect higher precedence to the directory name over the file name. 
Right now if I place a single file named `train_00001.jpg` under the `validation` directory, `load_dataset` cannot find the validation split.", "Thanks for the reply\r\n\r\nI've created a separate [issue](https://github.com/huggingface/datasets/issues/4982#issue-1375604693) for my problem.", "> @polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https://github.com/huggingface/datasets/blob/38c8c725f3996ff1ff03f6fd461aa6d645321034/src/datasets/data_files.py#L78-L79) of the patterns if data_dir is specified (or if load_dataset(data_dir) is called)?\r\n\r\nSounds good to me! opened a PR: https://github.com/huggingface/datasets/pull/4985", "Hi there @polinaeterna @mariosasko ! I have installed 5.2.3.dev0, which should have this fix. Unfortunately, I am still getting the error:\r\n`ValueError: Unknown split \"validation\". Should be one of ['train'].` When I call `load_dataset(\"csv\", data_files=files, split=split)`\r\n\r\nAny help would be greatly appreciated!", "hi @shaneacton ! could you please show your dataset structure?", "Hi there @polinaeterna . My local CSV files are stored as follows:\r\nbinding:\r\n---------- tune.csv\r\n---------- public_data:\r\n--------------------------- train.csv\r\n\r\n`self.list_shards(split)` sucessfully finds the relevant data files", "@shaneacton do you have `validation.csv`/`val.csv`/`valid.csv`/`dev.csv` file in your data folder? I can't find it in the structure you provided", "@polinaeterna no, does the name of the split need to match the name of the file exactly?\r\n\r\nBut my train file is not actually named 'train.py' its called 'XXXXXXXXX_train_XXXXXXXX.csv'\r\nAnd the code works fine for train, but fails for validation.\r\nDoes the file name need to _contain_ the split name?", "@shaneacton what files do you expect to be included in \"validation\" split? yes, you should somehow indicate that a file belongs to a certain split - either by including split name in a filename or by putting it into a folder with split name, you can also check out [this documentation page](https://huggingface.co/docs/datasets/main/en/repository_structure) :)\r\nby default all the data goes to a single `train` split", "@polinaeterna I have specified my train/test/tune files via the `split_to_filepattern` argument when initialising my `FileDataSource` class. This is how `list_shards` is able to find the right files.\r\nAfter your last message, I have tried renaminig my data files to simply `train.csv` and `validation.csv`, however I am still getting the same error: `Unknown split \"validation\". Should be one of ['train']`", "@polinaeterna I have solved the issue. The solution was to call:\r\n`load_dataset(\"csv\", data_files={split: files}, split=split)`" ]
2022-08-25T12:11:00Z
2022-10-06T17:49:28Z
2022-09-29T08:07:50Z
NONE
null
null
null
## Describe the bug The `datasets.load_dataset` returns a `ValueError: Unknown split "validation". Should be one of ['train', 'test'].` when running `load_dataset(local_data_dir_path, split="validation")` even if the `validation` sub-directory exists in the local data path. The data directories are as follows and attached to this issue: ``` test_data1 |_ train |_ 1012.png |_ metadata.jsonl ... |_ test ... |_ validation |_ 234.png |_ metadata.jsonl ... test_data2 |_ train |_ train_1012.png |_ metadata.jsonl ... |_ test ... |_ validation |_ val_234.png |_ metadata.jsonl ... ``` They contain the same image files and `metadata.jsonl` but the images in `test_data2` have the split names prepended i.e. `train_1012.png, val_234.png` and the images in `test_data1` do not have the split names prepended to the image names i.e. `1012.png, 234.png` I actually saw in another issue `val` was not recognized as a split name but here I would expect the files to take the split from the parent directory name i.e. val should become part of the validation split? ## Steps to reproduce the bug ```python import datasets datasets.logging.set_verbosity_error() from datasets import load_dataset, get_dataset_split_names # the following only finds train, validation and test splits correctly path = "./test_data1" print("######################", get_dataset_split_names(path), "######################") dataset_list = [] for spt in ["train", "test", "validation"]: dataset = load_dataset(path, split=spt) dataset_list.append(dataset) # the following only finds train and test splits path = "./test_data2" print("######################", get_dataset_split_names(path), "######################") dataset_list = [] for spt in ["train", "test", "validation"]: dataset = load_dataset(path, split=spt) dataset_list.append(dataset) ``` ## Expected results ``` ###################### ['train', 'test', 'validation'] ###################### ###################### ['train', 'test', 'validation'] ###################### ``` ## Actual results ``` Traceback (most recent call last): File "test_data_loader.py", line 11, in <module> dataset = load_dataset(path, split=spt) File "/home/venv/lib/python3.8/site-packages/datasets/load.py", line 1758, in load_dataset ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory) File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 893, in as_dataset datasets = map_nested( File "/home/venv/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 385, in map_nested return function(data_struct) File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 924, in _build_single_dataset ds = self._as_dataset( File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 993, in _as_dataset dataset_kwargs = ArrowReader(self._cache_dir, self.info).read( File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 211, in read files = self.get_file_instructions(name, instructions, split_infos) File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 184, in get_file_instructions file_instructions = make_file_instructions( File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 107, in make_file_instructions absolute_instructions = instruction.to_absolute(name2len) File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 616, in to_absolute return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions] File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 616, in <listcomp> return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions] File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 433, in _rel_to_abs_instr raise ValueError(f'Unknown split "{split}". Should be one of {list(name2len)}.') ValueError: Unknown split "validation". Should be one of ['train', 'test']. ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Linux Ubuntu 18.04 - Python version: 3.8.12 - PyArrow version: 9.0.0 Data files [test_data1.zip](https://github.com/huggingface/datasets/files/9424463/test_data1.zip) [test_data2.zip](https://github.com/huggingface/datasets/files/9424468/test_data2.zip)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4895/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4895/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4894
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4894/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4894/comments
https://api.github.com/repos/huggingface/datasets/issues/4894/events
https://github.com/huggingface/datasets/pull/4894
1,350,667,270
PR_kwDODunzps49yIvr
4,894
Add citation information to makhzan dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-25T10:16:40Z
2022-08-30T06:21:54Z
2022-08-25T13:19:41Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4894.diff", "html_url": "https://github.com/huggingface/datasets/pull/4894", "merged_at": "2022-08-25T13:19:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/4894.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4894" }
This PR adds the citation information to `makhzan` dataset, once they have replied to our request for that information: - https://github.com/zeerakahmed/makhzan/issues/43
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4894/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4894/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4893
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4893/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4893/comments
https://api.github.com/repos/huggingface/datasets/issues/4893/events
https://github.com/huggingface/datasets/issues/4893
1,350,655,674
I_kwDODunzps5QgV66
4,893
Oversampling strategy for iterable datasets in `interleave_datasets`
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "BDE59C", "default": false, "description": "Issues a bit more difficult than \"Good First\" issues", "id": 3761482852, "name": "good second issue", "node_id": "LA_kwDODunzps7gM6xk", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4", "events_url": "https://api.github.com/users/ylacombe/events{/privacy}", "followers_url": "https://api.github.com/users/ylacombe/followers", "following_url": "https://api.github.com/users/ylacombe/following{/other_user}", "gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ylacombe", "id": 52246514, "login": "ylacombe", "node_id": "MDQ6VXNlcjUyMjQ2NTE0", "organizations_url": "https://api.github.com/users/ylacombe/orgs", "received_events_url": "https://api.github.com/users/ylacombe/received_events", "repos_url": "https://api.github.com/users/ylacombe/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions", "type": "User", "url": "https://api.github.com/users/ylacombe" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4", "events_url": "https://api.github.com/users/ylacombe/events{/privacy}", "followers_url": "https://api.github.com/users/ylacombe/followers", "following_url": "https://api.github.com/users/ylacombe/following{/other_user}", "gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ylacombe", "id": 52246514, "login": "ylacombe", "node_id": "MDQ6VXNlcjUyMjQ2NTE0", "organizations_url": "https://api.github.com/users/ylacombe/orgs", "received_events_url": "https://api.github.com/users/ylacombe/received_events", "repos_url": "https://api.github.com/users/ylacombe/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions", "type": "User", "url": "https://api.github.com/users/ylacombe" } ]
null
[ "Hi @lhoestq,\r\nI plunged into the code and it should be manageable for me to work on it!\r\n#take\r\n\r\nAlso, setting `d1`, `d2` and `d3` as you did raised a `SyntaxError: 'yield' inside list comprehension` for me, on Python 3.8.10.\r\nThe following snippet works for me though:\r\n```\r\nd1 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [0, 1, 2]])), {}))\r\nd2 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [10, 11, 12, 13]])), {}))\r\nd3 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [20, 21, 22, 23, 24]])), {}))\r\n```\r\n\r\n", "Great @ylacombe thanks ! I'm assigning you this issue", "Hi @ylacombe :) Is there anything I can do to help ? Feel free to ping me if you have any question :)", "Hi @lhoestq,\r\n\r\nI actually have already wrote the code last time [on this commit](https://github.com/ylacombe/datasets/commit/84769db97facc78a33ec53f7b1b395951e1804df) but I still have to change the docs and write some tests though. I'm working on it.\r\n\r\nHowever, I still your advice on one matter. \r\nIn #4831, when using a `Dataset` list with probabilities, I had change the original behavior so that it stops as soon as one or all datasets are out of samples. By nature, this behavior can't be applied with an `IterableDataset` because one only knows an iterable dataset is out of sample when receiving a StopIteration error after calling the iterator once again. \r\nTo sum up, as it is right know, the behavior is not consistent with an `IterableDataset` list or a `Dataset` list, when using probabilities.\r\nTo be honest, I think that the current behavior with a `Dataset` list is desirable and avoid having too many samples, so I would recommand keeping that as it is, but I can understand the desire to have the same behavior for both classes. \r\nWhat do you think ? Please let me know if you need more details.\r\n\r\n\r\nEDIT:\r\nHere is an example:\r\n```\r\n>>> from tests.test_iterable_dataset import *\r\n>>> d1 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [0, 1, 2]])), {}))\r\n>>> d2 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [10, 11, 12, 13]])), {}))\r\n>>> d3 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [20, 21, 22, 23, 24]])), {}))\r\n>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)\r\n>>> [x[\"a\"] for x in dataset]\r\n[10, 0, 11, 1, 2, 20, 12, 13]\r\n>>> from tests.test_arrow_dataset import *\r\n>>> d1 = Dataset.from_dict({\"a\": [0, 1, 2]})\r\n>>> d2 = Dataset.from_dict({\"a\": [10, 11, 12]})\r\n>>> d3 = Dataset.from_dict({\"a\": [20, 21, 22]})\r\n>>> interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)[\"a\"]\r\n[10, 0, 11, 1, 2]\r\n[10, 0, 11, 1, 2]\r\n```\r\n ", "Hi ! Awesome :) \r\n\r\nMaybe you can pre-load the next sample to know if the dataset is empty or not ?\r\nThis way it should be possible to have the same behavior for `IterableDataset`", "Hi @ylacombe let us know if we can help with anything :)", "Hi @lhoestq, I've finally made some advances in the matter. I've modified the `IterableDataset` behavior so that it aligns with the `Dataset` behavior as we have discussed. The documentation has been dealt with too. \r\nIt works as expected on my examples. However I'm having trouble figuring out how to test `interleave_datasets` on `test_iterable_datasets.py` as I have never worked with pytest. 
Could you help me on that or give me some indications? \r\n", "Thanks @ylacombe :)\r\n\r\nUsing the `pytest` command, you can run all the functions in a python file that start with \"test_*\" and make sure they return not errors:\r\n```\r\npytest tests/test_iterable_dataset.py\r\n```\r\n\r\nIn our case it can be nice to define a `test_interleave_datasets_with_oversampling` function. This function can contain the code example that we mentioned earlier in this github issue to make sure it works as expected.", "Resolved via #5036." ]
2022-08-25T10:06:55Z
2022-10-03T12:37:46Z
2022-10-03T12:37:46Z
MEMBER
null
null
null
In https://github.com/huggingface/datasets/pull/4831 @ylacombe added an oversampling strategy for `interleave_datasets`. However right now it doesn't work for datasets loaded using `load_dataset(..., streaming=True)`, which are `IterableDataset` objects. It would be nice to expand `interleave_datasets` for iterable datasets as well to support this oversampling strategy ```python >>> from datasets.iterable_dataset import IterableDataset, ExamplesIterable >>> d1 = IterableDataset(ExamplesIterable(lambda: [(yield i, {"a": i}) for i in [0, 1, 2]], {})) >>> d2 = IterableDataset(ExamplesIterable(lambda: [(yield i, {"a": i}) for i in [10, 11, 12, 13]], {})) >>> d3 = IterableDataset(ExamplesIterable(lambda: [(yield i, {"a": i}) for i in [20, 21, 22, 23, 24]], {})) >>> dataset = interleave_datasets([d1, d2, d3]) # is supported >>> [x["a"] for x in dataset] [0, 10, 20, 1, 11, 21, 2, 12, 22] >>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted") # is not supported yet >>> [x["a"] for x in dataset] [0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 0, 24] ``` This can be implemented by adding the strategy to both `CyclingMultiSourcesExamplesIterable` and `RandomlyCyclingMultiSourcesExamplesIterable` used in `_interleave_iterable_datasets` in `iterable_dataset.py` I would be happy to share some guidance if anyone would like to give it a shot :)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4893/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4893/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4892
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4892/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4892/comments
https://api.github.com/repos/huggingface/datasets/issues/4892/events
https://github.com/huggingface/datasets/pull/4892
1,350,636,499
PR_kwDODunzps49yCD3
4,892
Add citation to ro_sts and ro_sts_parallel datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4892). All of your documentation changes will be reflected on that endpoint." ]
2022-08-25T09:51:06Z
2022-08-25T10:49:56Z
2022-08-25T10:49:56Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4892.diff", "html_url": "https://github.com/huggingface/datasets/pull/4892", "merged_at": "2022-08-25T10:49:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/4892.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4892" }
This PR adds the citation information to `ro_sts_parallel` and `ro_sts_parallel` datasets, once they have replied our request for that information: - https://github.com/dumitrescustefan/RO-STS/issues/4
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4892/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4892/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4891
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4891/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4891/comments
https://api.github.com/repos/huggingface/datasets/issues/4891/events
https://github.com/huggingface/datasets/pull/4891
1,350,589,813
PR_kwDODunzps49x382
4,891
Fix missing tags in dataset cards
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2022-08-25T09:14:17Z
2022-09-22T14:39:02Z
2022-08-25T13:43:34Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4891.diff", "html_url": "https://github.com/huggingface/datasets/pull/4891", "merged_at": "2022-08-25T13:43:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/4891.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4891" }
Fix missing tags in dataset cards: - aslg_pc12 - librispeech_lm - mwsc - opus100 - qasc - quail - squadshifts - winograd_wsc This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. Related to: - #4833
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4891/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4891/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4890
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4890/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4890/comments
https://api.github.com/repos/huggingface/datasets/issues/4890/events
https://github.com/huggingface/datasets/pull/4890
1,350,578,029
PR_kwDODunzps49x1YC
4,890
add Dataset.from_list
{ "avatar_url": "https://avatars.githubusercontent.com/u/48946947?v=4", "events_url": "https://api.github.com/users/sanderland/events{/privacy}", "followers_url": "https://api.github.com/users/sanderland/followers", "following_url": "https://api.github.com/users/sanderland/following{/other_user}", "gists_url": "https://api.github.com/users/sanderland/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sanderland", "id": 48946947, "login": "sanderland", "node_id": "MDQ6VXNlcjQ4OTQ2OTQ3", "organizations_url": "https://api.github.com/users/sanderland/orgs", "received_events_url": "https://api.github.com/users/sanderland/received_events", "repos_url": "https://api.github.com/users/sanderland/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sanderland/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanderland/subscriptions", "type": "User", "url": "https://api.github.com/users/sanderland" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "@albertvillanova it seems tests fail on pyarrow 6, perhaps from_pylist is a v7 method? How do you usually handle these version differences?\r\nAdded something that at least works" ]
2022-08-25T09:05:58Z
2022-09-02T10:22:59Z
2022-09-02T10:20:33Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4890.diff", "html_url": "https://github.com/huggingface/datasets/pull/4890", "merged_at": "2022-09-02T10:20:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/4890.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4890" }
As discussed in #4885 I initially added this bit at the end, thinking filling this field was necessary as it is done in from_dict. However, it seems the constructor takes care of filling info when it is empty. ``` if info.features is None: info.features = Features( { col: generate_from_arrow_type(coldata.type) for col, coldata in zip(pa_table.column_names, pa_table.columns) } ) ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4890/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4890/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4889
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4889/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4889/comments
https://api.github.com/repos/huggingface/datasets/issues/4889/events
https://github.com/huggingface/datasets/issues/4889
1,349,758,525
I_kwDODunzps5Qc649
4,889
torchaudio 11.0 yields different results than torchaudio 12.1 when loading MP3
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "Maybe we can just pass this along to torchaudio @lhoestq @albertvillanova ? It be great if you could investigate if the errors lies in datasets or in torchaudio.", "torchaudio did a change in [0.12](https://github.com/pytorch/audio/releases/tag/v0.12.0) on MP3 decoding (which affects common voice):\r\n> MP3 decoding is now handled by FFmpeg in sox_io backend. (https://github.com/pytorch/audio/pull/2419, https://github.com/pytorch/audio/pull/2428)\r\n> - FFmpeg is now used as fallback in sox_io backend, and now MP3 decoding is handled by FFmpeg. To load MP3 audio with torchaudio.load, please install a compatible version of FFmpeg (Version 4 when using an official binary distribution).\r\n> - Note that, whereas the previous MP3 decoding scheme pads the output audio, the new scheme does not. As a consequence, the new version returns shorter audio tensors.", "Do we have a solution for this now? Should we just upgrade to `torchaudio 0.12.0` then? ", "`datasets` supports `torchaudio` 0.12 if you have an environment that supports reading MP3 with `torchaudio`, i.e. if you have `ffmpeg>=4`" ]
2022-08-24T16:54:43Z
2022-10-05T13:54:04Z
null
MEMBER
null
null
null
## Describe the bug When loading Common Voice with torchaudio 0.11.0 the results are different to 0.12.1 which leads to problems in transformers see: https://github.com/huggingface/transformers/pull/18749 ## Steps to reproduce the bug If you run the following code once with `torchaudio==0.11.0+cu102` and `torchaudio==0.12.1+cu102` you can see that the tensors differ. This is a pretty big breaking change and makes some integration tests fail in Transformers. ```python #!/usr/bin/env python3 from datasets import load_dataset import datasets import numpy as np import torch import torchaudio print("torch vesion", torch.__version__) print("torchaudio vesion", torchaudio.__version__) save_audio = True load_audios = False if save_audio: ds = load_dataset("common_voice", "en", split="train", streaming=True) ds = ds.cast_column("audio", datasets.Audio(sampling_rate=16_000)) ds_iter = iter(ds) sample = next(ds_iter) np.save(f"audio_sample_{torch.__version__}", sample["audio"]["array"]) print(sample["audio"]["array"]) if load_audios: array_torch_11 = np.load("/home/patrick/audio_sample_1.11.0+cu102.npy") print("Array 11 Shape", array_torch_11.shape) print("Array 11 abs sum", np.sum(np.abs(array_torch_11))) array_torch_12 = np.load("/home/patrick/audio_sample_1.12.1+cu102.npy") print("Array 12 Shape", array_torch_12.shape) print("Array 12 abs sum", np.sum(np.abs(array_torch_12))) ``` Having saved the tensors the print output yields: ``` torch vesion 1.12.1+cu102 torchaudio vesion 0.12.1+cu102 Array 11 Shape (122880,) Array 11 abs sum 1396.4988 Array 12 Shape (123264,) Array 12 abs sum 1396.5193 ``` ## Expected results torchaudio 11.0 and 12.1 should yield same results. ## Actual results See above. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.1.1.dev0 - Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.34 - Python version: 3.9.7 - PyArrow version: 6.0.1 - Pandas version: 1.4.2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4889/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4889/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4888
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4888/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4888/comments
https://api.github.com/repos/huggingface/datasets/issues/4888/events
https://github.com/huggingface/datasets/issues/4888
1,349,447,521
I_kwDODunzps5Qbu9h
4,888
Dataset Viewer issue for subjqa
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" } ]
null
[ "It's a bug in the viewer, thanks for reporting it. We're hoping to update to a new version in the next few days which should fix it.", "Fixed \r\n\r\nhttps://huggingface.co/datasets/subjqa\r\n\r\n<img width=\"1040\" alt=\"Capture d’écran 2022-09-08 à 10 23 26\" src=\"https://user-images.githubusercontent.com/1676121/189073210-2a57ff88-8bb1-44bd-851e-0e75473cea3f.png\">\r\n" ]
2022-08-24T13:26:20Z
2022-09-08T08:23:42Z
2022-09-08T08:23:42Z
MEMBER
null
null
null
### Link https://huggingface.co/datasets/subjqa ### Description Getting the following error for this dataset: ``` Status code: 500 Exception: Status500Error Message: 2 or more items returned, instead of 1 ``` Not sure what's causing it though 🤔 ### Owner Yes
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4888/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4888/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4887
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4887/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4887/comments
https://api.github.com/repos/huggingface/datasets/issues/4887/events
https://github.com/huggingface/datasets/pull/4887
1,349,426,693
PR_kwDODunzps49t_PM
4,887
Add "cc-by-nc-sa-2.0" to list of licenses
{ "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/osanseviero", "id": 7246357, "login": "osanseviero", "node_id": "MDQ6VXNlcjcyNDYzNTc=", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "repos_url": "https://api.github.com/users/osanseviero/repos", "site_admin": false, "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "type": "User", "url": "https://api.github.com/users/osanseviero" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Sorry for the issue @albertvillanova! I think it's now fixed! :heart: " ]
2022-08-24T13:11:49Z
2022-08-26T10:31:32Z
2022-08-26T10:29:20Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4887.diff", "html_url": "https://github.com/huggingface/datasets/pull/4887", "merged_at": "2022-08-26T10:29:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/4887.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4887" }
Datasets side of https://github.com/huggingface/hub-docs/pull/285
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4887/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4887/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4886
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4886/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4886/comments
https://api.github.com/repos/huggingface/datasets/issues/4886/events
https://github.com/huggingface/datasets/issues/4886
1,349,285,569
I_kwDODunzps5QbHbB
4,886
Loading huggan/CelebA-HQ throws pyarrow.lib.ArrowInvalid
{ "avatar_url": "https://avatars.githubusercontent.com/u/11850255?v=4", "events_url": "https://api.github.com/users/JeanKaddour/events{/privacy}", "followers_url": "https://api.github.com/users/JeanKaddour/followers", "following_url": "https://api.github.com/users/JeanKaddour/following{/other_user}", "gists_url": "https://api.github.com/users/JeanKaddour/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JeanKaddour", "id": 11850255, "login": "JeanKaddour", "node_id": "MDQ6VXNlcjExODUwMjU1", "organizations_url": "https://api.github.com/users/JeanKaddour/orgs", "received_events_url": "https://api.github.com/users/JeanKaddour/received_events", "repos_url": "https://api.github.com/users/JeanKaddour/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JeanKaddour/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JeanKaddour/subscriptions", "type": "User", "url": "https://api.github.com/users/JeanKaddour" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "Hi! IIRC one of the files in this dataset is corrupted due to https://github.com/huggingface/datasets/pull/4081 (fixed now).\r\n\r\n@NielsRogge Could you please re-generate and re-push this dataset (or I can do it if you share the generation script)?" ]
2022-08-24T11:24:21Z
2022-09-08T16:29:04Z
null
NONE
null
null
null
## Describe the bug Loading huggan/CelebA-HQ throws pyarrow.lib.ArrowInvalid ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('huggan/CelebA-HQ') ``` ## Expected results See https://colab.research.google.com/drive/141LJCcM2XyqprPY83nIQ-Zk3BbxWeahq?usp=sharing#scrollTo=N3ml_7f8kzDd ## Actual results ``` File "/home/jean/projects/cold_diffusion/celebA.py", line 4, in <module> dataset = load_dataset('huggan/CelebA-HQ') File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/load.py", line 1793, in load_dataset builder_instance.download_and_prepare( File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 704, in download_and_prepare self._download_and_prepare( File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 793, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 1274, in _prepare_split for key, table in logging.tqdm( File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__ for obj in iterable: File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 67, in _generate_tables parquet_file = pq.ParquetFile(f) File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/pyarrow/parquet/__init__.py", line 286, in __init__ self.reader.open( File "pyarrow/_parquet.pyx", line 1227, in pyarrow._parquet.ParquetReader.open File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file. ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: datasets-2.4.1.dev0 - Platform: Ubuntu 18.04 - Python version: 3.10 - PyArrow version: pyarrow 9.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4886/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4886/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4885
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4885/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4885/comments
https://api.github.com/repos/huggingface/datasets/issues/4885/events
https://github.com/huggingface/datasets/issues/4885
1,349,181,448
I_kwDODunzps5QauAI
4,885
Create dataset from list of dicts
{ "avatar_url": "https://avatars.githubusercontent.com/u/48946947?v=4", "events_url": "https://api.github.com/users/sanderland/events{/privacy}", "followers_url": "https://api.github.com/users/sanderland/followers", "following_url": "https://api.github.com/users/sanderland/following{/other_user}", "gists_url": "https://api.github.com/users/sanderland/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sanderland", "id": 48946947, "login": "sanderland", "node_id": "MDQ6VXNlcjQ4OTQ2OTQ3", "organizations_url": "https://api.github.com/users/sanderland/orgs", "received_events_url": "https://api.github.com/users/sanderland/received_events", "repos_url": "https://api.github.com/users/sanderland/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sanderland/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanderland/subscriptions", "type": "User", "url": "https://api.github.com/users/sanderland" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Hi @sanderland, thanks for your enhancement proposal.\r\n\r\nI agree with you that this would be useful.\r\n\r\nPlease note that under the hood, we use PyArrow tables as backend:\r\n- The implementation of `Dataset.from_dict` uses the PyArrow `Table.from_pydict`\r\n\r\nTherefore, I would suggest:\r\n- Implementing `Dataset.from_list` using the PyArrow `Table.from_pylist`\r\n\r\nWhat do you think?\r\nLet's see if other people have other suggestions...", "Thanks for the quick and positive reply @albertvillanova! \r\n`from_list` seems sensible. Have opened a PR so we can discuss details there.", "Resolved via #4890." ]
2022-08-24T10:01:24Z
2022-09-08T16:02:52Z
2022-09-08T16:02:52Z
CONTRIBUTOR
null
null
null
I often find myself with data from a variety of sources, and a list of dicts is very common among these. However, converting this to a Dataset is a little awkward, requiring either ```Dataset.from_pandas(pd.DataFrame(formatted_training_data))``` Which can error out on some more exotic values as 2-d arrays for reasons that are not entirely clear > ArrowInvalid: ('Can only convert 1-dimensional array values', 'Conversion failed for column labels with type object') Alternatively: ```Dataset.from_dict({k: [s[k] for s in formatted_training_data] for k in formatted_training_data[0].keys()})``` Which works, but is a little ugly. **Describe the solution you'd like** Either `.from_dict` accepting a list of dicts, or a `.from_records` function accepting such. I am happy to PR this, just wanted to check you are happy to accept this I haven't missed something obvious, and which of the solutions would be prefered.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4885/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4885/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4884
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4884/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4884/comments
https://api.github.com/repos/huggingface/datasets/issues/4884/events
https://github.com/huggingface/datasets/pull/4884
1,349,105,946
PR_kwDODunzps49s6Aj
4,884
Fix documentation card of math_qa dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4884). All of your documentation changes will be reflected on that endpoint." ]
2022-08-24T09:00:56Z
2022-08-24T11:33:17Z
2022-08-24T11:33:16Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4884.diff", "html_url": "https://github.com/huggingface/datasets/pull/4884", "merged_at": "2022-08-24T11:33:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/4884.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4884" }
Fix documentation card of math_qa dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4884/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4884/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4883
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4883/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4883/comments
https://api.github.com/repos/huggingface/datasets/issues/4883/events
https://github.com/huggingface/datasets/issues/4883
1,349,083,235
I_kwDODunzps5QaWBj
4,883
With dataloader RSS memory consumed by HF datasets monotonically increases
{ "avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4", "events_url": "https://api.github.com/users/apsdehal/events{/privacy}", "followers_url": "https://api.github.com/users/apsdehal/followers", "following_url": "https://api.github.com/users/apsdehal/following{/other_user}", "gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/apsdehal", "id": 3616806, "login": "apsdehal", "node_id": "MDQ6VXNlcjM2MTY4MDY=", "organizations_url": "https://api.github.com/users/apsdehal/orgs", "received_events_url": "https://api.github.com/users/apsdehal/received_events", "repos_url": "https://api.github.com/users/apsdehal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions", "type": "User", "url": "https://api.github.com/users/apsdehal" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "Are you sure there is a leak? How can I see it? You shared the script but not the output which you believe should indicate a leak.\r\n\r\nI modified your reproduction script to print only once per try as your original was printing too much info and you absolutely must add `gc.collect()` when doing any memory measurements, since python's GC is scheduled so you might be measuring the wrong thing. This gives us:\r\n\r\n```\r\nimport psutil\r\nimport os\r\nimport gc\r\nfrom transformers import BertTokenizer\r\nfrom datasets import load_dataset\r\nfrom torch.utils.data import DataLoader\r\n\r\nBATCH_SIZE = 32\r\nNUM_TRIES = 100\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\ndef transform(x):\r\n x.update(tokenizer(x[\"text\"], return_tensors=\"pt\", max_length=64, padding=\"max_length\", truncation=True))\r\n x.pop(\"text\")\r\n x.pop(\"label\")\r\n return x\r\ndataset = load_dataset(\"imdb\", split=\"train\")\r\ndataset.set_transform(transform)\r\ntrain_loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=4)\r\n\r\nmem_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)\r\n\r\ncount = 0\r\nwhile count < NUM_TRIES:\r\n for idx, batch in enumerate(train_loader): pass\r\n gc.collect()\r\n mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)\r\n print(count, mem_after - mem_before)\r\n count += 1\r\n```\r\n\r\nNow running it:\r\n\r\n```\r\n$ python dl-leak.py \r\nReusing dataset imdb (/home/stas/.cache/huggingface/datasets/imdb/plain_text/1.0.0/2fdd8b9bcadd6e7055e742a706876ba43f19faee861df134affd7a3f60fc38a1)\r\n0 4.43359375\r\n1 4.4453125\r\n2 4.44921875\r\n3 4.44921875\r\n4 4.4609375\r\n5 4.46484375\r\n6 4.46484375\r\n7 4.46484375\r\n8 4.46484375\r\n9 4.46484375\r\n10 4.46484375\r\n11 4.46484375\r\n12 4.46484375\r\n13 4.46484375\r\n14 4.46484375\r\n15 4.46484375\r\n16 4.46484375\r\n```\r\n\r\nIt's normal that at the beginning there is a small growth in memory usage, but after 5 cycles it gets steady.", "Unless of course you're referring the memory growth during the first try. Is that what you're referring to? And since your ds is small it's hard to see the growth - could it be just because some records are longer and it needs to allocate more memory for those?\r\n\r\nThough while experimenting with this I have observed a peculiar thing, if I concatenate 2 datasets, I don't see any growth at all. But that's probably because the program allocated additional peak RSS memory to concatenate and then is re-using the memory\r\n\r\nI basically tried to see if I make the dataset much longer, I'd expect not to see any memory growth once the 780 records of the imdb ds have been processed once.", "It is hard to say if it is directly reproducible in this setup. Maybe it is specific to the images stored in the CM4 case which cause a memory leak. 
I am still running your script and seeing if I can reproduce that particular leak in this case.", "I was able to reproduce the leak with:\r\n\r\n```\r\nimport psutil\r\nimport os\r\nimport gc\r\nfrom datasets import load_from_disk\r\nimport time\r\n\r\nDATASET_PATH = \"/hf/m4-master/data/cm4/cm4-10000-v0.1\"\r\n\r\ndataset = load_from_disk(DATASET_PATH)\r\n\r\n# truncate to a tiny dataset\r\ndataset = dataset.select(range(1000))\r\n\r\nprint(f\"dataset: {len(dataset)} records\")\r\n\r\nmem_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)\r\nfor idx, rec in enumerate(dataset):\r\n if idx % 100 == 0:\r\n gc.collect()\r\n mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)\r\n print(f\"{idx:4d} {mem_after - mem_before:12.4f}MB\")\r\n```\r\nYou need to adjust the DATASET_PATH record.\r\n\r\nwhich you get from\r\n\r\n```\r\ngsutil -m cp \"gs://hf-science-m4/cm4/cm4-10000-v0.1/dataset.arrow\" \"gs://hf-science-m4/cm4/cm4-10000-v0.1/dataset_info.json\" \"gs://hf-science-m4/cm4/cm4-10000-v0.1/state.json\" .\r\n```\r\n(I assume the hf folks have the perms) - it's a smallish dataset (10k)\r\n\r\nthen you run:\r\n```\r\n$ python ds.py\r\ndataset: 1000 records\r\n 0 1.0156MB\r\n 100 126.3906MB\r\n 200 142.8906MB\r\n 300 168.5586MB\r\n 400 218.3867MB\r\n 500 230.7070MB\r\n 600 238.9570MB\r\n 700 263.3789MB\r\n 800 288.1289MB\r\n 900 300.5039MB\r\n```\r\n\r\nyou should be able to see the leak ", "This issue has nothing to do with `PIL`'s decoder. I removed it and the problem is still there.\r\n\r\nI then traced this leak to this single call: `pa_table.to_pydict()` here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/08a7b389cdd6fb49264a72aa8ccfc49a233494b6/src/datasets/formatting/formatting.py#L138-L140\r\n\r\nI can make it leak much faster by modifying that code to repeat `pa_table.to_pydict()` many times in a row. It shouldn't have that impact:\r\n\r\n```\r\nclass PythonArrowExtractor(BaseArrowExtractor[dict, list, dict]):\r\n def extract_row(self, pa_table: pa.Table) -> dict:\r\n x = [pa_table.to_pydict() for x in range(200)]\r\n return _unnest(pa_table.to_pydict())\r\n```\r\n\r\n@lhoestq - do you know what might be happening inside `pa_table.to_pydict()`, as this is in the `pyarrow` domain. Perhaps you know someone to tag from that project?\r\n\r\nProbably next need to remove `datasets` from the equation and make a reproducible case with just `pyarrow` directly.\r\n\r\nThe problem already happens with `pyarrow==6.0.0` or later (minimum for current `datasets`)\r\n\r\nI'm also trying to dig in with `objgraph` to see if there are any circular references which prevent objects from being freed, but no luck there so far. And I'm pretty sure `to_pydict` is not a python code, so the problem is likely to happen somewhere outside of python's GC.", "This appears to be the same issue I think: https://github.com/huggingface/datasets/issues/4528\r\nI dug into the repro code there and it's the same behavior with the same leak, but it's a pure nlp dataset and thus much faster to work with. \r\n", "I went all the way back to `pyarrow==1.0.0` and `datasets==1.12.0` and the problem is still there. How is it even possible that it wasn't noticed all this time. \r\n\r\nCould it be that the leak is in some 3rd party component `pyarrow` relies on? as while downgrading I have only downgraded the above 2 packages.\r\n", "Also found this warning \r\n\r\n> Be careful: if you don't pass the ArrowArray struct to a consumer,\r\n> array memory will leak. 
This is a low-level function intended for\r\n> expert users.\r\n\r\nsee: https://github.com/apache/arrow/blob/99b57e84277f24e8ec1ddadbb11ef8b4f43c8c89/python/pyarrow/table.pxi#L2515-L2517\r\n\r\nperhaps something triggers this condition?\r\n\r\nI have no idea if it's related - this is just something that came up during my research.", "Does it crash with OOM at some point? If it doesn't, it isn't a leak, just agressive caching or a custom allocator that doesn't like to give memory back (not uncommon). #4528 looks like it hits a steady state.\r\n\r\nI believe the underlying arrow libs use a custom C allocator. Some of those are designed not to give back to OS, but keep heap memory for themselves to re-use (hitting up the OS involves more expensive mutex locks, contention, etc). The greedy behaviour can be undesirable though. There are likely flags to change the allocator behaviour, and one could likely build without any custom allocators (or use a different one).", "> Does it crash with OOM at some point?\r\n\r\nIn the original setup where we noticed this problem, it was indeed ending in an OOM", "> https://github.com/huggingface/datasets/issues/4528 looks like it hits a steady state.\r\n\r\n@rwightman in the plot I shared, the steady state comes from the `time.sleep(100)` I added in the end of the script, to showcase that even the garbage collector couldn't free that allocated memory.\r\n", "Could this be related to this discussion about a potential memory leak in pyarrow: https://issues.apache.org/jira/browse/ARROW-11007 ?\r\n\r\n(Note: I've tried `import pyarrow; pyarrow.jemalloc_set_decay_ms(0)` and the memory leak is still happening on your toy example)", "> @lhoestq - do you know what might be happening inside pa_table.to_pydict(), as this is in the pyarrow domain. Perhaps you know someone to tag from that project?\r\n\r\n`to_pydict` calls `to_pylist` on each column (i.e. on each PyArrow Array). Then it iterates on the array and calls `as_py` on each element. The `as_py` implementation depends on the data type. For strings I think it simply gets the buffer that contains the binary string data that is defined in C++\r\n\r\nThe Arrow team is pretty responsive at user@arrow.apache.org if it can help\r\n\r\n> Probably next need to remove datasets from the equation and make a reproducible case with just pyarrow directly.\r\n\r\nThat would be ideal indeed. Would be happy to help on this, can you give me access to the bucket so I can try with your data ?", "> That would be ideal indeed. 
Would be happy to help on this, can you give me access to the bucket so I can try with your data ?\r\n\r\nI added you to the bucket @lhoestq ", "It looks like an issue with memory mapping:\r\n- the amount of memory used in the end corresponds to the size of the dataset\r\n- setting `keep_in_memory=True` in `load_from_disk` loads the dataset in RAM, and **doesn't cause any memory leak**", "Here is a code to reproduce this issue using only PyArrow and a dummy arrow file:\r\n```python\r\nimport psutil\r\nimport os\r\nimport gc\r\nimport pyarrow as pa\r\nimport time\r\n\r\nARROW_PATH = \"tmp.arrow\"\r\n\r\nif not os.path.exists(ARROW_PATH):\r\n arr = pa.array([b\"a\" * (200 * 1024)] * 1000) # ~200MB\r\n table = pa.table({\"a\": arr})\r\n\r\n with open(ARROW_PATH, \"wb\") as f:\r\n writer = pa.RecordBatchStreamWriter(f, schema=table.schema)\r\n writer.write_table(table)\r\n writer.close()\r\n\r\n\r\ndef memory_mapped_arrow_table_from_file(filename: str) -> pa.Table:\r\n memory_mapped_stream = pa.memory_map(filename)\r\n opened_stream = pa.ipc.open_stream(memory_mapped_stream)\r\n pa_table = opened_stream.read_all()\r\n return pa_table\r\n\r\n\r\ntable = memory_mapped_arrow_table_from_file(ARROW_PATH)\r\narr = table[0]\r\n\r\nmem_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)\r\nfor idx, x in enumerate(arr):\r\n if idx % 100 == 0:\r\n gc.collect()\r\n time.sleep(0.1)\r\n mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)\r\n print(f\"{idx:4d} {mem_after - mem_before:12.4f}MB\")\r\n```\r\nprints\r\n```\r\n 0 0.2500MB\r\n 100 19.8008MB\r\n 200 39.3320MB\r\n 300 58.8633MB\r\n 400 78.3945MB\r\n 500 97.9258MB\r\n 600 117.4570MB\r\n 700 136.9883MB\r\n 800 156.5195MB\r\n 900 176.0508MB\r\n```\r\nNote that this example simply iterates over the `pyarrow.lib.BinaryScalar` objects in the array. Running `.as_py()` is not needed to experience the memory issue.", "@lhoestq that does indeed increase in memory, but if you iterate over array again after the first time, or re-open and remap the same file (repeat `table = memory_mapped_arrow_table_from_file(ARROW_PATH)`) before re-iterating, it doesn't move pas 195MB.... it would appear another step is needed to continue consuming memory past that.. 
hmmm\r\n\r\nAre the pa_tables held on to anywhere after they are iterated in the real code?\r\n\r\nin my hack, if you do a bunch cut & paste and then change the arr name for each iter \r\n\r\n```\r\ntable = memory_mapped_arrow_table_from_file(ARROW_PATH)\r\narr = table[0]\r\n\r\nfor idx, x in enumerate(arr):\r\n if idx % 100 == 0:\r\n gc.collect()\r\n time.sleep(0.1)\r\n mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)\r\n print(f\"{idx:4d} {mem_after - mem_before:12.4f}MB\")\r\n\r\ntable = memory_mapped_arrow_table_from_file(ARROW_PATH)\r\narr1 = table[0]\r\n\r\nfor idx, x in enumerate(arr1):\r\n if idx % 100 == 0:\r\n gc.collect()\r\n time.sleep(0.1)\r\n mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)\r\n print(f\"{idx:4d} {mem_after - mem_before:12.4f}MB\")\r\n\r\ntable = memory_mapped_arrow_table_from_file(ARROW_PATH)\r\narr2 = table[0]\r\n\r\nfor idx, x in enumerate(arr2):\r\n if idx % 100 == 0:\r\n gc.collect()\r\n time.sleep(0.1)\r\n mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)\r\n print(f\"{idx:4d} {mem_after - mem_before:12.4f}MB\")\r\n```\r\n\r\nit leaks, if all arr are the same name (so prev one gets cleaned up) it does not and goes back to 0, anything that could be holding onto a reference of an intermediary equivalent like arr in the real use case?\r\n\r\n\r\n\r\n", "Yes, we have already established here https://github.com/huggingface/datasets/issues/4883#issuecomment-1232063891 that when one iterates over the whole dataset multiple times, it consumes a bit more memory in the next few repetitions and then remains steady. \r\n\r\nWhich means that when a new iterator is created over the same dataset, all the memory from the previous iterator is re-used.\r\n\r\nSo the leak happens primarily when the iterator is \"drained\" the first time. which tells me that either a circular reference is created somewhere which only gets released when the iterator is destroyed, or there is some global variable that keeps piling up the memory and doesn't release it in time.\r\n\r\nAlso I noticed some `__del__` methods which won't destroy objects automatically and there is usually a warning against using it https://stackoverflow.com/a/1481512/9201239\r\n\r\nThere are also some `weakref`s in the code which too may lead to leaks or weird problems at times.\r\n", "@stas00 my point was, I'm not convinced @lhoestq last example illustrates the leak, but rather the differences between memory mapping and in memory usage patterns. If you destroy arr, memory map impl goes back to 0 each iteration. The amount of memory that 'looks' like it is leaked in first pass differes quite a bit between memory mapped vs in memory, but the underlying issue likely a circular reference, or reference(s) which were not cleaned up that would impact either case, but likely much more visible with mmap.", "Thank you for clarifying, Ross. \r\n\r\nI think we agree that it's almost certain that the `datasets` iterator traps some inner variable that prevents object freeing, since if we create the iterator multiple times (and drain it) after a few runs no new memory is allocated. We could try to dig in more with `objgraph` - my main concern is if the problem happens somewhere outside of python, (i.e. in pyarrow cpp implementation) in which case it'd be much more difficult to trace. 
\r\n\r\nI wish there was a way on linux to tell the program to free no longer used memory at will.", "FWIW, I revisted some code I had in the works to use HF datasets w/ timm train & val scripts. There is no leak there across multipe epochs. It uses the defaults. \r\n\r\nIt's worth noting that with imagenet `keep_in_memory=True` isn't even an option because the train arrow file is ~140GB and my local memory is less. The virtual address space reflects mmap (> 150GB) and doesn't increase over epochs that I noticed. I have some perf issues to bring up wrt to the current setup, but that's a separate and lower prio discussion to have elsewhere...", "# Notes \r\n\r\nAfter reading many issues and trying many things here is the summary of my learning\r\n\r\nI'm now using @lhoestq repro case as it's pyarrow-isolated: https://github.com/huggingface/datasets/issues/4883#issuecomment-1242034985\r\n\r\n\r\n## 1. pyarrow memory backends\r\n\r\nit has 3 backends, I tried them all with the same results\r\n\r\n```\r\npa.set_memory_pool(pa.jemalloc_memory_pool())\r\npa.set_memory_pool(pa.mimalloc_memory_pool())\r\npa.set_memory_pool(pa.system_memory_pool())\r\n```\r\n\r\n## 2. quick release\r\n\r\nThe `jemalloc` backend supports quick release\r\n\r\n```\r\npa.jemalloc_set_decay_ms(0)\r\n```\r\n\r\nit doesn't make any difference in this case\r\n\r\n## 3. actual memory allocations\r\n\r\nthis is a useful tracer for PA memory allocators\r\n```\r\npa.log_memory_allocations(enable=True)\r\n```\r\n\r\nit nicely reports memory allocations and releases when the arrow file is created the first time.\r\n\r\nbut when we then try to do `enumerate(arr)` this logger reports 0 allocations.\r\n\r\nThis summary also reports no allocations when the script run the second time (post file creation):\r\n```\r\nmem_pool = pa.default_memory_pool()\r\nprint(f\"PyArrow mem pool info: {mem_pool.backend_name} backend, {mem_pool.bytes_allocated()} allocated, \"\r\n f\"{mem_pool.max_memory()} max allocated, \")\r\n\r\nprint(f\"PyArrow total allocated bytes: {pa.total_allocated_bytes()}\")\r\n```\r\n\r\nHowever it's easy to see by using `tracemalloc` which only measures python's memory allocations that it's PA that leaks, since `tracemalloc` shows fixed memory\r\n\r\n(this is bolted on top of the original repro script)\r\n\r\n```\r\nimport tracemalloc\r\ntracemalloc.start()\r\n\r\n[...]\r\nfor idx, x in enumerate(arr):\r\n if idx % 10 == 0:\r\n gc.collect()\r\n time.sleep(0.1)\r\n mem_after = psutil.Process(os.getpid()).memory_info().rss / 2**20\r\n mem_use = pa.total_allocated_bytes() - start_use\r\n mem_peak = pool.max_memory() - start_peak_use\r\n\r\n second_size, second_peak = tracemalloc.get_traced_memory()\r\n mem_diff = (second_size - first_size) / 2**20\r\n mem_peak_diff = (second_peak - first_peak) / 2**20\r\n\r\n # pa.jemalloc_memory_pool().release_unused()\r\n # pa.mimalloc_memory_pool().release_unused()\r\n # pa.system_memory_pool().release_unused()\r\n\r\n print(f\"{idx:4d} {mem_after - mem_before:12.4f}MB {mem_diff:12.4f} {mem_peak_diff:12.4f} {memory_mapped_stream.size()/2**20:4.4}MB {mem_use/2**20:4.4}MB {mem_peak/2**20:4.4}MB\")\r\n\r\n```\r\n\r\ngives:\r\n\r\n```\r\n 0 5.4258MB 0.0110 0.0201 195.3MB 0.0MB 0.0MB\r\n 10 25.3672MB 0.0112 0.0202 195.3MB 0.0MB 0.0MB\r\n 20 45.9336MB 0.0112 0.0203 195.3MB 0.0MB 0.0MB\r\n 30 62.4336MB 0.0112 0.0203 195.3MB 0.0MB 0.0MB\r\n 40 83.0586MB 0.0112 0.0203 195.3MB 0.0MB 0.0MB\r\n 50 103.6836MB 0.0112 0.0203 195.3MB 0.0MB 0.0MB\r\n 60 124.3086MB 0.0112 0.0203 195.3MB 0.0MB 0.0MB\r\n 70 
140.8086MB 0.0112 0.0203 195.3MB 0.0MB 0.0MB\r\n 80 161.4336MB 0.0112 0.0203 195.3MB 0.0MB 0.0MB\r\n 90 182.0586MB 0.0112 0.0203 195.3MB 0.0MB 0.0MB\r\n```\r\n\r\nthe 3rd and 4th columns are `tracemalloc`'s report.\r\n\r\nthe 5th column is the size of mmaped stream - fixed.\r\n\r\nthe last 2 are the PA's malloc reports - you can see it's totally fixed and 0.\r\n\r\nSo what gives? PA's memory allocator says nothing was allocated and we can see python doesn't allocate any memory either.\r\n\r\nAs someone suggested in one of the PA issues that **IPC/GRPC could be the issue.** Any suggestions on how debug this one?\r\n\r\nThe main issue is that one can't step through with a python debugger as `arr` is an opaque cpp object binded to python.\r\n\r\nPlease see the next comment for a possible answer.\r\n\r\n# ref-count\r\n\r\nI also traced reference counts and they are all fixed using either `sys.getrefcount(x)` or `len(gc.get_referrers(x))`\r\n\r\nso it's not the python object\r\n\r\n# Important related discussions\r\n\r\nhttps://issues.apache.org/jira/browse/ARROW-11007 - looks very similar to our issue\r\nin particular this part of the report:\r\nhttps://issues.apache.org/jira/browse/ARROW-11007?focusedCommentId=17279642&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17279642\r\n", "# There is no leak, just badly communicated linux RSS memory usage stats\r\n\r\nNext, lets revisit @rwightman's suggestion that there is actually no leak.\r\n\r\nAfter all - we are using mmap which **will try to map** the file to RAM as much as it can and then page out if there is no memory. i.e. MMAP is only fast if you have a lot of CPU RAM.\r\n\r\nSo let's do it:\r\n\r\n# Memory mapping OOM test\r\n\r\nWe first quickly start a cgroups-controlled shell which will instantly kill any program that consumes more than 1GB of memory:\r\n\r\n```\r\n$ systemd-run --user --scope -p MemoryHigh=1G -p MemoryMax=1G -p MemorySwapMax=1G --setenv=\"MEMLIMIT=1GB\" bash\r\n```\r\n\r\nLet's check that it indeed does so. Let's change @lhoestq's script to allocate a 10GB arrow file:\r\n\r\n```\r\n$ python -c 'import pyarrow as pa; pa.array([b\"a\" * (2000 * 1024)] * 5000)'\r\nKilled\r\n```\r\noops, that didn't work, as we tried to allocate 10GB when only 1GB is allowed. This is what we want!\r\n\r\nLet's do a sanity check - can we allocate 0.1GB?\r\n```\r\npython -c 'import pyarrow as pa; pa.array([b\"a\" * (2000 * 1024)] * 50)'\r\n```\r\n\r\nYes. So the limited shell does the right thing. 
It let's allocate `< 1GB` of RSS RAM.\r\n\r\nNext let's go back to @lhoestq's script but with 10GB arrow file.\r\n\r\nwe change his repro script https://github.com/huggingface/datasets/issues/4883#issuecomment-1242034985 to 50x larger file\r\n```\r\n arr = pa.array([b\"a\" * (2000 * 1024)] * 5000) # ~10000MB\r\n```\r\nwe first have to run into a normal unlimited shell so that we don't get killed (as the script allocates 10GB)\r\n\r\nlet's run the script now in the 1GB-limited shell while running a monitor:\r\n\r\n```\r\n$ htop -F python -s M_RESIDENT -u `whoami`\r\n```\r\n\r\nso we have 2 sources of RSS info just in case.\r\n\r\n```\r\n$ python pyar.py\r\n 0 4.3516MB 0.0103 0.0194 9.766e+03MB 0.0MB 0.0MB\r\n 10 24.3008MB 0.0104 0.0195 9.766e+03MB 0.0MB 0.0MB\r\n[...]\r\n4980 9730.3672MB 0.0108 0.0199 9.766e+03MB 0.0MB 0.0MB\r\n4990 9750.9922MB 0.0108 0.0199 9.766e+03MB 0.0MB 0.0MB\r\nPyArrow mem pool info: jemalloc backend, 0 allocated, 0 max allocated,\r\nPyArrow total allocated bytes: 0\r\n```\r\n\r\nBut wait, it reported 10GB RSS both in `htop` and in our log!\r\n\r\nSo that means it never allocated 10GB otherwise it'd have been killed.\r\n\r\n**Which tells us that there is no leak whatsoever** and this is just a really difficult situation where MMAPPED memory is reported as part of RSS which it probably shouldn't. As now we have no way how to measure real memory usage.\r\n\r\nI also attached the script with all the different things I have tried in it, so it should be easy to turn them on/off if you want to reproduce any of my findings.\r\n\r\n[pyar.txt](https://github.com/huggingface/datasets/files/9539430/pyar.txt)\r\n\r\njust rename it to `pyra.py` as gh doesn't let attaching scripts...\r\n\r\n(I have to remember to exit that special mem-limited shell or else I won't be able to do anything serious there.)\r\n\r\n", "The original leak in the multi-modal code is very likely something else. But of course now it'd be very difficult to trace it using mmap.\r\n\r\nI think to debug we have to set `keep_in_memory=True` in `load_from_disk` to load the small dataset in RAM, so there will be no mmap misleading reporting component and then continue searching for another source of a leak.", "To add to what @stas00 found, I'm gonna leave some links to where I believe the confusion came from in pyarrow's APIs, for future reference:\r\n* In the section where they talk about [efficiently writing and reading arrow data](https://arrow.apache.org/docs/dev/python/ipc.html?#efficiently-writing-and-reading-arrow-data), they give an example of how \r\n\r\n> Arrow can directly reference the data mapped from disk and avoid having to allocate its own memory. \r\n\r\nAnd where their example shows 0 RSS memory allocation, the way we used to measure RSS shows 39.6719MB allocated. 
Here's the script to reproduce:\r\n```\r\nimport psutil\r\nimport os\r\nimport gc\r\nimport pyarrow as pa\r\nimport time\r\nimport sys\r\n\r\n# gc.set_debug(gc.DEBUG_LEAK)\r\n# gc.set_threshold(0,0,0)\r\n\r\n#pa.set_memory_pool(pa.mimalloc_memory_pool())\r\n#pa.set_memory_pool(pa.system_memory_pool())\r\n\r\nimport tracemalloc\r\n\r\n#pa.jemalloc_set_decay_ms(0)\r\n# pa.log_memory_allocations(enable=True)\r\n\r\nBATCH_SIZE = 10000\r\nNUM_BATCHES = 1000\r\nschema = pa.schema([pa.field('nums', pa.int32())])\r\nwith pa.OSFile('bigfile.arrow', 'wb') as sink:\r\n with pa.ipc.new_file(sink, schema) as writer:\r\n for row in range(NUM_BATCHES):\r\n batch = pa.record_batch([pa.array(range(BATCH_SIZE), type=pa.int32())], schema)\r\n writer.write(batch)\r\n\r\nstart_use = pa.total_allocated_bytes()\r\npool = pa.default_memory_pool()\r\nstart_peak_use = pool.max_memory()\r\ntracemalloc.start()\r\nfirst_size, first_peak = tracemalloc.get_traced_memory()\r\nmem_before = psutil.Process(os.getpid()).memory_info().rss / 2**20\r\n\r\n\r\n# with pa.OSFile('bigfile.arrow', 'rb') as source:\r\n# loaded_array = pa.ipc.open_file(source).read_all()\r\n\r\nwith pa.memory_map('bigfile.arrow', 'rb') as source:\r\n loaded_array = pa.ipc.open_file(source).read_all()\r\n\r\n\r\nprint(\"LEN:\", len(loaded_array))\r\nprint(\"RSS: {}MB\".format(pa.total_allocated_bytes() >> 20))\r\n\r\ngc.collect()\r\ntime.sleep(0.1)\r\nmem_after = psutil.Process(os.getpid()).memory_info().rss / 2**20\r\nmem_use = pa.total_allocated_bytes() - start_use\r\nmem_peak = pool.max_memory() - start_peak_use\r\nsecond_size, second_peak = tracemalloc.get_traced_memory()\r\nmem_diff = (second_size - first_size) / 2**20\r\nmem_peak_diff = (second_peak - first_peak) / 2**20\r\n\r\nidx = 0\r\nprint(f\"{idx:4d} {mem_after - mem_before:12.4f}MB {mem_diff:12.4f} {mem_peak_diff:12.4f} {mem_use/2**20:4.4}MB {mem_peak/2**20:4.4}MB\")\r\n```\r\ngives:\r\n```\r\n\r\nLEN: 10000000\r\nRSS: 0MB\r\n 0 39.6719MB 0.0132 0.0529 0.0MB 0.0MB\r\n```\r\nWhich again just proves that we uncorrectly measure RSS, in the case of MMAPPED memory\r\n\r\n\r\n* [The recommended way to do memory profiling from Arrow's docs](https://arrow.apache.org/docs/dev/cpp/memory.html#memory-profiling)\r\n", "@lhoestq, I have been working on a detailed article that shows that MMAP doesn't leak and it's mostly ready. I will share when it's ready.\r\n\r\nThe issue is that we still need to be able to debug memory leaks by turning MMAP off.\r\n\r\nBut, once I tried to show the user that using `load_dataset(... keep_in_memory=True)` is the way to debug an actual memory leak - guess I what I discovered? 
A potential actual leak.\r\n\r\nHere is the repro:\r\n\r\n```\r\n$ cat ds-mmap.py\r\nfrom datasets import load_dataset\r\nimport gc\r\nimport os\r\nimport psutil\r\n\r\nproc = psutil.Process(os.getpid())\r\ndef mem_read():\r\n gc.collect()\r\n return proc.memory_info().rss / 2**20\r\n\r\ndataset = load_dataset(\"wmt19\", 'cs-en', keep_in_memory=True, streaming=False)['train']\r\n\r\nprint(f\"{'idx':>6} {'RSS':>10} {'Δ RSS':>15}\")\r\nstep = 20000\r\nfor i in range(0, 10*step, step):\r\n mem_before = mem_read()\r\n _ = dataset[i:i+step]\r\n mem_after = mem_read()\r\n print(f\"{i:6d} {mem_after:12.4f}MB {mem_after - mem_before:12.4f}MB \")\r\n```\r\n\r\n```\r\npython ds-io.py\r\nReusing dataset wmt19 (/home/stas/.cache/huggingface/datasets/wmt19/cs-en/1.0.0/c3db1bf4240362ed1ef4673b354f468d70aac66d4e67d45f536d493a0840f0d3)\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 5.66it/s]\r\n idx RSS Δ RSS\r\n 0 1398.4609MB 3.5195MB\r\n 20000 1398.5742MB 0.1133MB\r\n 40000 1398.6016MB 0.0273MB\r\n 60000 1398.6016MB 0.0000MB\r\n 80000 1398.6016MB 0.0000MB\r\n100000 1398.6328MB 0.0312MB\r\n120000 1398.6953MB 0.0625MB\r\n140000 1398.6953MB 0.0000MB\r\n160000 1398.7500MB 0.0547MB\r\n180000 1398.7500MB 0.0000MB\r\n```", "as I suggested on slack perhaps it was due to dataset records length variation, so with your help I wrote another repro with synthetic records which are all identical - which should remove my hypothese from the equation and we should expect 0 incremental growth as we iterate over the datasets. But alas this is not the case. There is a tiny but definite leak-like behavior.\r\n\r\nHere is the new repro:\r\n\r\n```\r\n$ cat ds-synthetic-no-mmap.py\r\nfrom datasets import load_from_disk, Dataset\r\nimport gc\r\nimport sys\r\nimport os\r\nimport psutil\r\n\r\nproc = psutil.Process(os.getpid())\r\ndef mem_read():\r\n gc.collect()\r\n return proc.memory_info().rss / 2**20\r\n\r\nDS_PATH = \"synthetic-ds\"\r\nif not os.path.exists(DS_PATH):\r\n records = 1_000_000\r\n print(\"Creating a synthetic dataset\")\r\n row = dict(foo=[dict(a='a'*500, b='b'*1000)])\r\n ds = Dataset.from_dict({k: [v] * records for k, v in row.items()})\r\n ds.save_to_disk(DS_PATH)\r\n print(\"Done. 
Please restart the program\")\r\n sys.exit()\r\n\r\ndataset = load_from_disk(DS_PATH, keep_in_memory=True)\r\nprint(f\"Dataset len={len(dataset)}\")\r\n\r\nprint(f\"{'idx':>8} {'RSS':>10} {'Δ RSS':>15}\")\r\nmem_start = 0\r\nstep = 25_000\r\nwarmup_iterations = 4\r\nfor idx, i in enumerate(range(0, len(dataset), step)):\r\n if idx == warmup_iterations: # skip the first few iterations while things get set up\r\n mem_start = mem_read()\r\n mem_before = mem_read()\r\n _ = dataset[i:i+step]\r\n mem_after = mem_read()\r\n print(f\"{i:8d} {mem_after:12.4f}MB {mem_after - mem_before:12.4f}MB\")\r\nmem_end = mem_read()\r\n\r\nprint(f\"Total diff: {mem_end - mem_start:12.4f}MB (after {warmup_iterations} warmup iterations)\")\r\n```\r\n\r\nand the run:\r\n\r\n```\r\n$ python ds-synthetic-no-mmap.py\r\nDataset len=1000000\r\n idx RSS Δ RSS\r\n 0 1601.9258MB 47.9688MB\r\n 25000 1641.6289MB 39.7031MB\r\n 50000 1641.8594MB 0.2305MB\r\n 75000 1642.1289MB 0.2695MB\r\n 100000 1642.1289MB 0.0000MB\r\n 125000 1642.3789MB 0.2500MB\r\n 150000 1642.3789MB 0.0000MB\r\n 175000 1642.6289MB 0.2500MB\r\n 200000 1642.6289MB 0.0000MB\r\n 225000 1642.8789MB 0.2500MB\r\n 250000 1642.8828MB 0.0039MB\r\n 275000 1643.1328MB 0.2500MB\r\n 300000 1643.1328MB 0.0000MB\r\n 325000 1643.3828MB 0.2500MB\r\n 350000 1643.3828MB 0.0000MB\r\n 375000 1643.6328MB 0.2500MB\r\n 400000 1643.6328MB 0.0000MB\r\n 425000 1643.8828MB 0.2500MB\r\n 450000 1643.8828MB 0.0000MB\r\n 475000 1644.1328MB 0.2500MB\r\n 500000 1644.1328MB 0.0000MB\r\n 525000 1644.3828MB 0.2500MB\r\n 550000 1644.3828MB 0.0000MB\r\n 575000 1644.6328MB 0.2500MB\r\n 600000 1644.6328MB 0.0000MB\r\n 625000 1644.8828MB 0.2500MB\r\n 650000 1644.8828MB 0.0000MB\r\n 675000 1645.1328MB 0.2500MB\r\n 700000 1645.1328MB 0.0000MB\r\n 725000 1645.3828MB 0.2500MB\r\n 750000 1645.3828MB 0.0000MB\r\n 775000 1645.6328MB 0.2500MB\r\n 800000 1645.6328MB 0.0000MB\r\n 825000 1645.8828MB 0.2500MB\r\n 850000 1645.8828MB 0.0000MB\r\n 875000 1646.1328MB 0.2500MB\r\n 900000 1646.1328MB 0.0000MB\r\n 925000 1646.3828MB 0.2500MB\r\n 950000 1646.3828MB 0.0000MB\r\n 975000 1646.6328MB 0.2500MB\r\nTotal diff: 4.5039MB (after 4 warmup iterations)\r\n```\r\nso I'm still not sure why we get this.\r\n\r\nAs you can see I started skipping the first few iterations where memory isn't stable yet. As the actual diff is much larger if we count all iterations.\r\n\r\nWhat do you think?", "@stas00 my 2 cents from having looked at a LOT of memory leaks over the years, esp in Python, .3% memory increase over that many iterations of something is difficult to say with certainty it is a leak. \r\n\r\nAlso, just looking at RSS makes it hard to analyze leaks. RSS can stay near constant while you are leaking. 
RSS is paged in mem, if you have a big leak your RSS might not increase much (leaked mem tends not to get used again so often paged out) while your virtual page allocation could be going through the roof...", "yes, that's true, but unless the leak is big, I'm yet to find another measurement tool.\r\n\r\nTo prove your point here is a very simple IO in a loop program that also reads the same line all over again:\r\n\r\n```\r\n$ cat mmap-no-leak-debug.py\r\nimport gc\r\nimport mmap\r\nimport os\r\nimport psutil\r\nimport sys\r\n\r\nproc = psutil.Process(os.getpid())\r\n\r\nPATH = \"./tmp.txt\"\r\n\r\ndef mem_read():\r\n gc.collect()\r\n return proc.memory_info().rss / 2**20\r\n\r\n# create a large data file with a few long lines\r\nif not os.path.exists(PATH):\r\n with open(PATH, \"w\") as fh:\r\n s = 'a'* 2**27 + \"\\n\" # 128MB\r\n # write ~2GB file\r\n for i in range(16):\r\n fh.write(s)\r\n\r\nprint(f\"{'idx':>4} {'RSS':>10} {'Δ RSS':>12} {'Δ accumulated':>10}\")\r\n\r\ntotal_read = 0\r\ncontent = ''\r\nmem_after = mem_before_acc = mem_after_acc = mem_before = proc.memory_info().rss / 2**20\r\nprint(f\"{0:4d} {mem_after:10.2f}MB {mem_after - 0:10.2f}MB {0:10.2f}MB\")\r\n\r\nmmap_mode = True if \"--mmap\" in sys.argv else False\r\n\r\nwith open(PATH, \"r\") as fh:\r\n\r\n if mmap_mode:\r\n mm = mmap.mmap(fh.fileno(), 0, access=mmap.ACCESS_READ)\r\n\r\n idx = 0\r\n while True:\r\n idx += 1\r\n mem_before = mem_read()\r\n line = mm.readline() if mmap_mode else fh.readline()\r\n if not line:\r\n break\r\n\r\n #total_read += len(line)\r\n\r\n if \"--accumulate\" in sys.argv:\r\n mem_before_acc = mem_read()\r\n content += str(line)\r\n mem_after_acc = mem_read()\r\n\r\n mem_after = mem_read()\r\n\r\n print(f\"{idx:4d} {mem_after:10.2f}MB {mem_after - mem_before:10.2f}MB {mem_after_acc - mem_before_acc:10.2f}MB\")\r\n```\r\n\r\nit has some other instrumentations to do mmap and accumulate data, but let's ignore that for now.\r\n\r\nHere it is running in a simple non-mmap IO:\r\n\r\n```\r\n$ python mmap-no-leak-debug.py\r\n idx RSS Δ RSS Δ accumulated\r\n 0 12.43MB 12.43MB 0.00MB\r\n 1 269.72MB 257.29MB 0.00MB\r\n 2 269.73MB 0.02MB 0.00MB\r\n 3 269.73MB 0.00MB 0.00MB\r\n 4 269.74MB 0.01MB 0.00MB\r\n 5 269.74MB 0.00MB 0.00MB\r\n 6 269.75MB 0.01MB 0.00MB\r\n 7 269.75MB 0.00MB 0.00MB\r\n 8 269.76MB 0.01MB 0.00MB\r\n 9 269.76MB 0.00MB 0.00MB\r\n 10 269.77MB 0.01MB 0.00MB\r\n 11 269.77MB 0.00MB 0.00MB\r\n 12 269.77MB 0.00MB 0.00MB\r\n 13 269.77MB 0.00MB 0.00MB\r\n 14 269.77MB 0.00MB 0.00MB\r\n 15 269.77MB 0.00MB 0.00MB\r\n 16 146.02MB -123.75MB 0.00MB\r\n```\r\n\r\nas you can see even this super-simplistic program that just performs `readline()` slightly increases in RSS over iterations.\r\n\r\nIf you have a better tool for measurement other than RSS, I'm all ears.", "@stas00 if you aren't using memory maps, you should be able to clearly see the increase in the virtual mem for the process as well. Even then, it could still be challenging to determine if it's leak vs fragmentation due to problematic allocation patterns (not uncommon with Python). Using a better mem allocator like tcmalloc via LD_PRELOAD hooks could reduce impact of fragmentation across both Python and c libs. Not sure that plays nice with any allocator that arrow might use itself though. " ]
2022-08-24T08:42:54Z
2022-09-29T16:16:31Z
null
MEMBER
null
null
null
## Describe the bug When the HF datasets is used in conjunction with PyTorch Dataloader, the RSS memory of the process keeps on increasing when it should stay constant. ## Steps to reproduce the bug Run and observe the output of this snippet which logs RSS memory. ```python import psutil import os from transformers import BertTokenizer from datasets import load_dataset from torch.utils.data import DataLoader BATCH_SIZE = 32 NUM_TRIES = 10 tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") def transform(x): x.update(tokenizer(x["text"], return_tensors="pt", max_length=64, padding="max_length", truncation=True)) x.pop("text") x.pop("label") return x dataset = load_dataset("imdb", split="train") dataset.set_transform(transform) train_loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=4) mem_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024) count = 0 while count < NUM_TRIES: for idx, batch in enumerate(train_loader): mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024) print(count, idx, mem_after - mem_before) count += 1 ``` ## Expected results Memory should not increase after initial setup and loading of the dataset ## Actual results Memory continuously increases as can be seen in the log. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10 - Python version: 3.8.13 - PyArrow version: 7.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 2, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/4883/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4883/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4882
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4882/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4882/comments
https://api.github.com/repos/huggingface/datasets/issues/4882/events
https://github.com/huggingface/datasets/pull/4882
1,348,913,665
PR_kwDODunzps49sRtv
4,882
Fix language tags resource file
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4882). All of your documentation changes will be reflected on that endpoint." ]
2022-08-24T06:06:01Z
2022-08-24T13:58:33Z
2022-08-24T13:58:30Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4882.diff", "html_url": "https://github.com/huggingface/datasets/pull/4882", "merged_at": "2022-08-24T13:58:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/4882.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4882" }
This PR fixes/updates/adds ALL language tags from IANA (as of 2022-08-08). It also removes all BCP-47 suffixes: the languages file only contains language subtags, i.e. ISO 639-1 or 639-2 codes, with no script/region/variant suffixes.

See:
- #4753
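For context on how such a resource file can be consumed, here is a small sketch (not the actual validation code used by `datasets` or the Hub; it assumes the file is a flat JSON object mapping language subtags to English names, and the local path is hypothetical):

```python
import json
from pathlib import Path

# Hypothetical local copy of src/datasets/utils/resources/languages.json
LANGUAGES_PATH = Path("languages.json")

def load_language_tags(path: Path = LANGUAGES_PATH) -> dict:
    """Load the tag -> language-name mapping from the resource file."""
    return json.loads(path.read_text(encoding="utf-8"))

def check_tags(tags):
    """Report which tags are present in the resource file."""
    known = load_language_tags()
    for tag in tags:
        if tag in known:
            print(f"{tag}: {known[tag]}")
        else:
            print(f"{tag}: not in the resource file (may need to be added, or is a BCP-47 variant)")

check_tags(["fr", "jya", "nru", "x-japh1234"])
```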
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4882/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4882/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4881
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4881/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4881/comments
https://api.github.com/repos/huggingface/datasets/issues/4881/events
https://github.com/huggingface/datasets/issues/4881
1,348,495,777
I_kwDODunzps5QYGmh
4,881
Language names and language codes: connecting to a big database (rather than slow enrichment of custom list)
{ "avatar_url": "https://avatars.githubusercontent.com/u/6072524?v=4", "events_url": "https://api.github.com/users/alexis-michaud/events{/privacy}", "followers_url": "https://api.github.com/users/alexis-michaud/followers", "following_url": "https://api.github.com/users/alexis-michaud/following{/other_user}", "gists_url": "https://api.github.com/users/alexis-michaud/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alexis-michaud", "id": 6072524, "login": "alexis-michaud", "node_id": "MDQ6VXNlcjYwNzI1MjQ=", "organizations_url": "https://api.github.com/users/alexis-michaud/orgs", "received_events_url": "https://api.github.com/users/alexis-michaud/received_events", "repos_url": "https://api.github.com/users/alexis-michaud/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alexis-michaud/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexis-michaud/subscriptions", "type": "User", "url": "https://api.github.com/users/alexis-michaud" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Thanks for opening this discussion, @alexis-michaud.\r\n\r\nAs the language validation procedure is shared with other Hugging Face projects, I'm tagging them as well.\r\n\r\nCC: @huggingface/moon-landing ", "on the Hub side, there is not fine grained validation we just check that `language:` contains an array of lowercase strings between 2 and 3 characters long =)\r\n\r\nand for `language_bcp47:` we just check it's an array of strings.\r\n\r\nThe only page where we have a hardcoded list of languages is https://huggingface.co/languages and I've been thinking of hooking that page on an external database of languages (so any suggestion is super interesting), but it's not used for validation.\r\n\r\nThat being said, in `datasets` this file https://github.com/huggingface/datasets/blob/main/src/datasets/utils/resources/languages.json is not really used no? Or just in the tagging tool? What about just removing it?\r\n\r\nalso cc'ing @lbourdois who's been active and helpful on those subjects in the past!", "PS @alexis-michaud is there a DB of language codes you would recommend? That would contain all `ISO 639-1, 639-2 or 639-3 codes` and be kept up to date, and ideally that would be accessible as a Node.js npm package?\r\n\r\ncc @albertvillanova too", "> PS @alexis-michaud is there a DB of language codes you would recommend? That would contain all `ISO 639-1, 639-2 or 639-3 codes` and be kept up to date, and ideally that would be accessible as a Node.js npm package?\r\n> \r\n> cc @albertvillanova too\r\n\r\nMany thanks for your answer! \r\n\r\nThe Glottolog database is kept up to date, and has information on the closest ISO code for each Glottocode. So providing a clean table with equivalences sounds (to me) like something perfectly reasonable to expect from their team. \r\nTo what extent would [pyglottolog](https://github.com/glottolog/pyglottolog) fit the bill / do the job? (API documentation [here](https://pyglottolog.readthedocs.io/en/latest/index.html)) I'm reaching my technical limitations here: I can't assess the distance between what they offer and what the HF team needs. \r\nI have opened an Issue in [their repo](https://github.com/glottolog/glottolog-cldf/issues/13). \r\n\r\nVery interested to see where it goes from there.", "I just tried pyglottolog to generate a file with all the current IDs (first column).\r\n\r\n`glottolog languoids` inside the `glottolog` repository.\r\n\r\n[glottolog-languoids-v4.6-10-g5c66eec874.csv](https://github.com/huggingface/datasets/files/9417456/glottolog-languoids-v4.6-10-g5c66eec874.csv)\r\n\r\n", "Greetings @alexis-michaud and others,\r\nI think perhaps a standards-based approach here would help everyone out both at the technical and social layers of technical innovations. \r\n\r\nLet me say a few things: \r\n1. there are multiple kinds of assets in AI that should have associated language codes. \r\n * AI Training Data sets\r\n * AI models\r\n * AI outputs\r\nThese are all distinct components which should be tagged for the language and encoding methods they operate on or enhance. For example, an AI based cross-language tool from French to English (UK variety) still needs to consider if it is operating on oral language speech or written text. This is where [IANA language sub-tags](https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry) come in and are so important. I link to the official source. 
If one wants to use middleware such as a python package or npm package to manage strings then please make sure those packages are updating codes as they are being revised. I see that @julien-c mentioned BCP-47. BCP-47 is the current standard for language tagging. Following it will make the resources you create more findable and let future users better understand or expect any biases which may have been introduced in the different AI based products.\r\n2. BCP-47 is a technical read. However, you will notice that it identifies when to use an ISO 639-1, ISO 639-2, or ISO 639-3. code. This is important for interoperability with many systems. If you are using library systems then you should likely just stick with ISO 639-3 codes.\r\n3. If you are going to use Glottolog codes use them after an `-x-` tag in the BCP-47 format to maintain BCP-47 validity. \r\n4. You should source ISO 639-3 codes directly from the [ISO 639-3 registrar](https://iso639-3.sil.org/code_tables/639/data) as these codes are updated annually, usually in February or March. ISO 639-3 codes have multiple classes: `Active`, `Deprecated`, and `Unassigned`. This means that string length checking is not a sufficient strategy for validation.\r\n5. The names of smaller languages often change depending on the language used to describe them. The [ISO639-2 documentation](https://www.loc.gov/standards/iso639-2/php/code_list.php) has a list of language names for languages with smaller populations for languages in which descriptions about these languages are often written. For example, ISO 639-2's documentation contains the names of languages as they are used in French, German, and English. ISO 639-2 rarely is updated as it is now tied to ISO 639-3's evolution and modern systems should just use ISO 639-3, but these additional names of languages in other languages may not appear in the ISO 369-3 tables.\r\n6. Glottolog codes are also updated at least annually. Usually sometime after ISO 639-3 updates.\r\n7. Please, if the material is in a written mode, please indicate which script is used unless the IANA field has a `suppress script` value. Please use the script tag that BCP-47 calls for from [ISO 15924](https://unicode.org/iso15924/iso15924-codes.html). This also updates at least annually. \r\n8. Another great place to look for language names is the [Unicode CLDR database for locales](https://cldr.unicode.org/translation/displaynames/languagelocale-names). These ought to be congruent with ISO 639-3 but, sometimes CLDR has additional references to languages (such as the french name for a language) which is not contained in ISO 639-2 or ISO 639-3.\r\n9. Wikidata for language names is not always a great source of authoritative information. Language names are asymmetrical. Many times they are contrived because there is no actual name for the language in the language referring... e.g. French doesn't have a name for every language in the world, often they say something like: the language of 'x' people. — English does the same. When a language name standard does not have the best name for a language the best way to handle that is to make a change request with the standards registrar. Keeping track of the source list and the version of your source list for your language codes is very important. \r\n10. Finally, It would be a great service to technologist, minority language communities, and linguists if for all resources of the three types mentioned in number 1 above you added a record to [OLAC](http://www.language-archives.org/). 
— I can help you with that. OLAC is a search interface for language resources.\r\n", "Hi everybody!\r\n\r\nAbout the point:\r\n> also cc'ing @lbourdois who's been active and helpful on those subjects in the past!\r\n\r\nDiscussions on the need to improve the Hub's tagging system (applying to both datasets and models) can be found in the following discussion: https://github.com/huggingface/hub-docs/issues/193\r\nOnce this system has been redone and satisfies the identified needs, a redesign of the [Languages page](https://huggingface.co/languages) would also be relevant: https://github.com/huggingface/hub-docs/issues/194. \r\nI invite you to read them. But as a quick summary, the exchanges were oriented towards the ISO standard (the first HF system was based on it and it is generally the standard indicated in AI/DL papers) by favouring ISO 639-1 if it exists, and fallback to ISO 639-2 or ISO 639-3 if it doesn't. In addition, it is possible to add BCP-47 tags to consider existing varieties/regionalisms within a language (https://huggingface.co/datasets/AmazonScience/massive/discussions/1). If a language does not belong to either of these two standards, then a request should be made to the HF team to add it manually.\r\n\r\n\r\nTo return to the present discussion, thank you for the various databases and methodologies you mention. It makes a big difference to have linguists in the loop 🚀.\r\n\r\nI have a couple of questions where I think an expert perspective would be appreciated:\r\n- Do you think it's possible to easily handle tags that have been deprecated potentially for decades?\r\nFor example (I'm taking the case of Hebrew but this has happened for other languages) I tagged Google models with the \"iw\" [tag](https://huggingface.co/models?language=iw&sort=downloads) because I based it on what the authors gave in their [paper](https://arxiv.org/pdf/2010.11934.pdf) see table 6 page 12). It turns out that this ISO tag has in fact been deprecated since 1989 in favour of the \"he\" tag. It would therefore be necessary to have a verification that transforms the old tags into the most recent ones.\r\n\r\n- When you look up a language on Wikipedia, it usually shows, in addition to the ISO standard, the codes in the Glottolog (which you have already mentioned), [ELP](https://www.endangeredlanguages.com/?hl=en) and [Linguasphere](http://www.linguasphere.info/jr/index.php?l1=home&l2=welcome) databases. Would you have any opinion about these two other databases?\r\n\r\n- On the Hub, there is the following dataset where French people speak in English: https://huggingface.co/datasets/Datatang/French_Speaking_English_Speech_Data_by_Mobile_Phone \r\nIs there a database to take this case into account? I have not found any code in the Glottolog database. If based on an IETF BCP-47 standard, I would tend to tag the dataset with \"en-fr\" but would this be something accepted by linguists?\r\nBased on the first post in this thread that there are about 8000 languages, if one considers that a given language can be pronounced by a speaker of the other 7999, that would theoretically make about 64 million BCP-47 language1-language2 codes existing. And even much more if we consider regionalists with language1_regionalism_x-language2_regionalism_y. 
I guess there is no such database.\r\n\r\n- Are there any databases that take into account all the existing sign languages in the world?\r\nIt would be nice to have them included on the Hub.\r\n\r\n- Is there an international classification of languages?\r\nA bit like the [International Classification of Diseases](https://en.wikipedia.org/wiki/International_Classification_of_Diseases) in medicine, which is established by the WHO and used as a reference throughout the world. The idea would be to have a precise number of languages to which we would then have to assign a unique tag in order to find them later. \r\n\r\n- Finally for the CNRS team, when can we expect to see all the datasets of [Pangloss](https://pangloss.cnrs.fr/) on HF? 👀 And I don't know if you have a way to help to add also the datasets of [CoCoON](https://cocoon.huma-num.fr/exist/crdo/).", "> I invite you to read them. But as a quick summary, the exchanges were oriented towards the ISO standard (the first HF system was based on it and it is generally the standard indicated in AI/DL papers) by favouring ISO 639-1 if it exists, and fallback to ISO 639-2 or ISO 639-3 if it doesn't. In addition, it is possible to add BCP-47 tags to consider existing varieties/regionalisms within a language (https://huggingface.co/datasets/AmazonScience/massive/discussions/1). If a language does not belong to either of these two standards, then a request should be made to the HF team to add it manually.\r\n\r\nOne comment on this fall back system (which generally follows the BCP-47 process). ISO 639-2 has some codes which refer to a language ambiguously. For example, I believe code `ara` is used for arabic. In some contexts arabic is considered a single language, however, Egyptian Arabic is quite different from Moroccan Arabic, which are both considered separate languages. These ambiguous codes are valid ISO 639-3 codes but they have a special status. They are called `macro codes`. They exist inside the ISO 639-3 standard to provide absolute fallback compatibility between ISO 639-2 and ISO 639-3. However, when considering AI and MT applications with language data, the unforeseen potential applications and the potential for bias using macro codes should be avoided for new applications of language tags to resources. For historical cases where it is not clear what resources were used to create the AI tools or datasets then I understand the use of ambiguous tag uses. So for clarity in language tagging I suggest:\r\n\r\n1. Strictly following BCP-47\r\n2. Whenever possible avoid the use of macro tags in the ISO 639-3 standard. These are BCP-47 valid, but could introduce biases in the application of their use in society. (Generally there are more specific tags available to use in the ISO 639-3 standard.)", "> * Are there any databases that take into account all the existing sign languages in the world?\r\n> It would be nice to have them included on the Hub.\r\n\r\nSign Languages present an interesting case. As I understand the situation. The identification of sign languages has been identified as a component of their endangerment. Some sign languages do exist in ISO 639-3. 
For further discussion on the issue I refer readers to the following publications: \r\n\r\n* https://doi.org/10.3390/languages7010049\r\n* https://www.academia.edu/35870983/The_ethics_of_of_language_identification_and_ISO_639\r\n\r\nOne way to be BCP-47 compliant and identify a sign language which is not identified in any of the BCP-47 referenced standards is to use the ISO 639-3 code for undetermined language `und` and then apply a custom suffix indicator (as explained in BCP-47) `-x-` and a custom code, such as the ones used in https://doi.org/10.3390/languages7010049", "> * Is there an international classification of languages?\r\n> A bit like the [International Classification of Diseases](https://en.wikipedia.org/wiki/International_Classification_of_Diseases) in medicine, which is established by the WHO and used as a reference throughout the world. The idea would be to have a precise number of languages to which we would then have to assign a unique tag in order to find them later.\r\n\r\nYes that would be the function of ISO 639-3. It is the reference standard for languages. It includes a code and its name and the status of the code. Many technical metadata standards for file and computer interoperability reference it, many technical library metadata standards reference it. Some linguists use it. Many governments reference it. \r\n\r\nIndexing diseases are different from indexing languages in several ways, one way is that diseases are the impact of a pathogen not the pathogen itself. If we take COVID-19 as an example, there are many varieties of the pathogen but broadly speaking there is only one disease — with many symptoms.\r\n\r\n", ">* When you look up a language on Wikipedia, it usually shows, in addition to the ISO standard, the codes in the Glottolog (which you have already mentioned), [ELP](https://www.endangeredlanguages.com/?hl=en) and [Linguasphere](http://www.linguasphere.info/jr/index.php?l1=home&l2=welcome) databases. Would you have any opinion about these two other databases?\r\n\r\nWhile these do appear on wikipedia, I don't know of any information system which uses these codes. I do know that glottolog did import ELP data at one time and its database does contain ELP data I'm not sure if Glottolog regularly ingests new versions of ELP data. I suspect that the use of Linguasphere data may be relevant to users of wikidata as a linked data attribute but I haven't heard of any linked data projects using Linguasphere data for analysis or product development. My impression is that it is fairly unused.", "> * Do you think it's possible to easily handle tags that have been deprecated potentially for decades?\r\n>For example (I'm taking the case of Hebrew but this has happened for other languages) I [tag](https://huggingface.co/models?language=iw&sort=downloads)ged Google models with the \"iw\" tag because I based it on what the authors gave in their [paper](https://arxiv.org/pdf/2010.11934.pdf) see table 6 page 12). It turns out that this ISO tag has in fact been deprecated since 1989 in favour of the \"he\" tag. It would therefore be necessary to have a verification that transforms the old tags into the most recent ones.\r\n\r\nYes. You can parse the IANA file linked to above (it is regularly updated). All deprecated tags are marked as such in that file. The new prefered tag if there is one, is indicated. 
ISO 639-3 also indicates a code's status but their list is relevant only codes within their domain (ISO 639-3).", "> * On the Hub, there is the following dataset where French people speak in English: https://huggingface.co/datasets/Datatang/French_Speaking_English_Speech_Data_by_Mobile_Phone\r\nIs there a database to take this case into account? I have not found any code in the Glottolog database. If based on an IETF BCP-47 standard, I would tend to tag the dataset with \"en-fr\" but would this be something accepted by linguists?\r\n\r\nI would interpret `en-fr` as english as spoken in France. `fr`in this position refers to the geo-political entity not a second language. I see no reason that other linguists should have a different option after having read BCP-47 and understood how it works.\r\n\r\nThe functional goal here is to tag a language resource as being produced by nonnative speakers, while tagging both languages. There are several problems here. The first is that BCP-47 has no way explicit way to do this. One could use the sub code `x-` with a private use code to indicate a second language and infer some meaning as to that language's role. However, there is another problem here which complexifies the situation greatly... how do we know that those english speakers (in France, or from France, or who were native French speakers) were not speaking their third or fourth language rather than their second language. So to conceptualize a sub-tag which indicates the first language of a speech act for speakers in a second (or other) language would need to be carefully crafted. It might then be proposed to the appropriate authorities. For example three sub-tags exist.\r\n\r\nThere are three registered sub-tags out of a BCP-47 allowed 35. These are `x-`, `u-`, and `t-`. `u-` and `t-` are defined in [RFC6067 ](https://www.rfc-editor.org/rfc/rfc6067)and [RFC6497](https://www.rfc-editor.org/rfc/rfc6497) . For more information see the [Unicode CLDR documentation](https://cldr.unicode.org/index/bcp47-extension) where it says: \r\n\r\n\r\n>[IETF BCP 47 ](http://www.google.com/url?q=http%3A%2F%2Ftools.ietf.org%2Fhtml%2Fbcp47&sa=D&sntz=1&usg=AOvVaw1DoMN1IBGg-JHgECBvdW1t)[Tags for Identifying Languages](http://www.google.com/url?q=http%3A%2F%2Ftools.ietf.org%2Fhtml%2Fbcp47&sa=D&sntz=1&usg=AOvVaw1DoMN1IBGg-JHgECBvdW1t) defines the language identifiers (tags) used on the Internet and in many standards. It has an extension mechanism that allows additional information to be included. The Unicode Consortium is the maintainer of the extension ‘u’ for Locale Extensions, as described in [rfc6067](https://www.google.com/url?q=https%3A%2F%2Ftools.ietf.org%2Fhtml%2Frfc6067&sa=D&sntz=1&usg=AOvVaw0gGWi0EjHfy1WId8k8oKAi), and the extension 't' for Transformed Content, as described in [rfc6497](https://www.google.com/url?q=https%3A%2F%2Ftools.ietf.org%2Fhtml%2Frfc6497&sa=D&sntz=1&usg=AOvVaw0w-OUsFX1PtaKYIq31P64I).\r\n>\r\n>The subtags available for use in the 'u' extension provide language tag extensions that provide for additional information needed for identifying locales. The 'u' subtags consist of a set of keys and associated values (types). For example, a locale identifier for British English with numeric collation has the following form: en-GB-u-kn-true\r\n>\r\n>The subtags available for use in the 't' extension provide language tag extensions that provide for additional information needed for identifying transformed content, or a request to transform content in a certain way. 
For example, the language tag \"ja-Kana-t-it\" can be used as a content tag indicates Japanese Katakana transformed from Italian. It can also be used as a request for a given transformation.\r\n>\r\n>For more details on the valid subtags for these extensions, their syntax, and their meanings, see LDML Section 3.7 [Unicode BCP 47 Extension Data](http://www.google.com/url?q=http%3A%2F%2Fwww.unicode.org%2Freports%2Ftr35%2F%23Locale_Extension_Key_and_Type_Data&sa=D&sntz=1&usg=AOvVaw0lMthb9KbTJtoOd5mvv3Ha).", "Hi @lbourdois ! Many thanks for the detailed information.\r\n\r\n> Discussions on the need to improve the Hub's tagging system (applying to both datasets and models) can be found in the following discussion: [huggingface/hub-docs#193](https://github.com/huggingface/hub-docs/issues/193) \r\nFascinating topic! To me, the following suggestion has a lot of appeal:\r\n\"if consider that it was necessary to create an ISO 639-3 because ISO 639-1 was deficient, it would be to do the reverse and thus convert the tags from ISO 639-1 to ISO 639-2 or 3 (https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes or https://iso639-3.sil.org/code_tables/639/data).\"\r\n\r\nYes, ISO 639-1 is unsuitable because it has so few codes: less than 200. To address linguistic diversity in 'unrestricted mode', a list of all languages is wanted. \r\n\r\nThe idea of letting people use their favourite nomenclature and automatically adding the ISO 639-3 three-letter code as a tag is appealing. Thus all the HF datasets would have three-letter language tags (handy for basic search), alongside the authors' preferred tags and language names (including Glottolog tags as well as ISO 639-{1, 2}, and all other options allowed by BCP-47). \r\n\r\nRetaining the authors' original tags and language names would be best. \r\n* For language names: some people favour one name over another and it is important to respect their choice. In the case of Yongning Na: alternative names include 'Mosuo', 'Narua', 'Eastern Naxi'... and the names carry implications: people have been reported to come to blows about the use of the term 'Mosuo'. \r\n* For language tags: Glottocodes can be more fine-grained than Ethnologue (ISO 639-3), and some colleagues feel strongly about those. \r\n\r\nThus there would be a BCP-47 tag (sounds like a solid technical choice, though not 'passer-by-friendly': requiring some expertise to interpret) **plus** an ISO 639-3 tag that could be grabbed easily, and (last but not least) language names spelled out in full. Searches would be easier. No information would be lost. \r\n\r\nAre industry practices so conservative that many people are happy with two-letter codes, and consider ISO 639-3 three-letter codes an unnecessary complication? That would be a pity, since there are so many advantages to using longer lists. (Somewhat like the transition to Unicode: sooo much better!) But maybe that conservative attitude _is_ widespread, and it would then need to be taken into account. In which case, one could consider offering two-letter codes as a search option. Internally, the search engine would look up the corresponding 3-letter codes, and produce the search results accordingly. 
\r\n\r\nNow to the other questions:\r\n\r\n> * Do you think it's possible to easily handle tags that have been deprecated potentially for decades?\r\n> For example (I'm taking the case of Hebrew but this has happened for other languages) I tagged Google models with the \"iw\" [tag](https://huggingface.co/models?language=iw&sort=downloads) because I based it on what the authors gave in their [paper](https://arxiv.org/pdf/2010.11934.pdf) see table 6 page 12). It turns out that this ISO tag has in fact been deprecated since 1989 in favour of the \"he\" tag. It would therefore be necessary to have a verification that transforms the old tags into the most recent ones.\r\nI guess that the above suggestion takes care of this case. The original tag (in this example, \"iw\") is retained (facilitating cross-reference with the published paper, and respecting the real: the way the dataset was originally tagged). This old tag goes into the `BCP-47` field of the dataset, which can handle quirks & oddities like this one. And a new tag is added in the `ISO 639-3` field: the 3-letter code \"heb\". \r\n\r\n> * When you look up a language on Wikipedia, it usually shows, in addition to the ISO standard, the codes in the Glottolog (which you have already mentioned), [ELP](https://www.endangeredlanguages.com/?hl=en) and [Linguasphere](http://www.linguasphere.info/jr/index.php?l1=home&l2=welcome) databases. Would you have any opinion about these two other databases?\r\n\r\nI'm afraid I never heard about Linguasphere. The [online register for Linguasphere (PDF)](http://www.linguasphere.info/jr/pdf/index/LS_index_n-n.pdf) seems to be from 1999-2000. It seems that the level of interoperability is not very high right now. (By contrast, Glottolog has [pyglottolog](https://github.com/glottolog/pyglottolog) and in my experience contacts flow well.) \r\n\r\nThe Endangered Languages Project is something Google started but initially did not 'push' very strongly, it seems. Just airing an opinion on the public Internet, it seems that the project is now solidly rooted at University of Hawaiʻi at Mānoa. It seems that they do not generate codes of their own. They refer to ISO 639-3 (Ethnologue) as a code authority when applicable, and otherwise provide comments in so many words, such as that language L currently lacks an Ethnologue code of its own (example [here](https://www.endangeredlanguages.com/lang/10624)). \r\n\r\n> * On the Hub, there is the following dataset where French people speak in English: https://huggingface.co/datasets/Datatang/French_Speaking_English_Speech_Data_by_Mobile_Phone\r\n> Is there a database to take this case into account? I have not found any code in the Glottolog database. If based on an IETF BCP-47 standard, I would tend to tag the dataset with \"en-fr\" but would this be something accepted by linguists?\r\n> Based on the first post in this thread that there are about 8000 languages, if one considers that a given language can be pronounced by a speaker of the other 7999, that would theoretically make about 64 million BCP-47 language1-language2 codes existing. And even much more if we consider regionalists with language1_regionalism_x-language2_regionalism_y. I guess there is no such database.\r\n\r\nYes, you noted the difficulty here: that there are so many possible situations. Eventually, each dataset would required descriptors of its own. @BenjaminGalliot points out that, in addition to specifying the speakers' native languages, the degree of language proficiency would also be relevant. 
How many years did the speakers spend in which area? Talking which languages? In what chronological order? Etc. The complexity defies encoding. The purpose of language codes is to allow for searches that group resources into sets that make sense. Additional information is very important, but would seem to be a matter for 'comments' fields. \r\n\r\n> * Is there an international classification of languages?\r\n> A bit like the [International Classification of Diseases](https://en.wikipedia.org/wiki/International_Classification_of_Diseases) in medicine, which is established by the WHO and used as a reference throughout the world. The idea would be to have a precise number of languages to which we would then have to assign a unique tag in order to find them later.\r\n\r\nAs I understand, Ethnologue and Glottolog both try to do that, each in its own way. The simile with diseases seems interesting, to some extent: in both cases it's about human classification of phenomena that have complexity (though some diseases are simpler than others, whereas all languages have much complexity, in different ways).\r\n\r\n> * Finally, when can we expect to see all the datasets of [Pangloss](https://pangloss.cnrs.fr/) on HF? eyes And I don't know if you have a way to help to add also the datasets of [CoCoON](https://cocoon.huma-num.fr/exist/crdo/).\r\n\r\nThree concerns: (i) Technical specifications: we have not yet received feedback on the Japhug and Na datasets in HF. There may be technical considerations that we have not yet thought of and that would need to be taken into account before 'bulk upload'. (ii) Would there be a way to automate the process? The way @BenjaminGalliot did it for Japhug and Na, there was a manual component involved, and doing it by hand for all 200 datasets would not be an ideal workflow, given that the metadata are all clearly arranged. (iii) Some datasets are currently under a 'No derivatives' CreativeCommons license. We could go back to the depositors and argue that the 'No derivatives' mention were best omitted (see [here a similar argument about publications](https://creativecommons.org/2020/04/21/academic-publications-under-no-derivatives-licenses-is-misguided/)): again, we'd want to be sure about the way forward before we set the process into motion.\r\n\r\nOur hope would be that some colleagues try out the [OutilsPangloss](https://gitlab.com/lacito/outilspangloss) download tool, assemble datasets from Pangloss/Cocoon as they wish, then deposit them to HF.", "> The idea of letting people use their favourite nomenclature and automatically adding the ISO 639-3 three-letter code as a tag is appealing. Thus all the HF datasets would have three-letter language tags (handy for basic search), alongside the authors' preferred tags and language names (including Glottolog tags as well as ISO 639-{1, 2}, and all other options allowed by BCP-47).\r\n> \r\n> Retaining the authors' original tags and language names would be best.\r\n> \r\n> * For language names: some people favour one name over another and it is important to respect their choice. In the case of Yongning Na: alternative names include 'Mosuo', 'Narua', 'Eastern Naxi'... 
and the names carry implications: people have been reported to come to blows about the use of the term 'Mosuo'.\r\n> * For language tags: Glottocodes can be more fine-grained than Ethnologue (ISO 639-3), and some colleagues feel strongly about those.\r\n> \r\n> Thus there would be a BCP-47 tag (sounds like a solid technical choice, though not 'passer-by-friendly': requiring some expertise to interpret) **plus** an ISO 639-3 tag that could be grabbed easily, and (last but not least) language names spelled out in full. Searches would be easier. No information would be lost.\r\n\r\n@alexis-michaud raises an excellent point. Language Resource users have varying search habits (or approaches). This includes cases where two or more language names refer to a single language. A search utility/interface needs to be flexible and able to present results from various kinds of input in the search process. This could be like how the terms French/Français/Franzosisch (en/fr/de) are names for the same language or it could be a variety of the following: autoglottonyms (how the speakers of the language refer to their language), or exoglottonyms (how others refer to the language). Additionally, in web based searches I have also needed to implement diacritic sensitive and insensitive logic so that users can type with or without diacritics and not have results unnecessarily excluded. \r\n\r\nDepending on how detailed of a search problem HF seeks to solve. It may be better to off load complex search to search engines like OLAC which aggregate a lot of language resources. — as I mentioned above I can assist with the informatics on creating an OLAC feed.\r\n\r\nAbstracting search logic from actual metadata may prove a useful way to lower the technical debt overhead. Technical tools and library standards use ISO and BCP-47 Standards. So, from a bibliographic metadata perspective this seems to be the way forward with the widest set of use cases. ", "To get a visual idea of these first exchanges, I coded a Streamlit app that I put online on Spaces: https://huggingface.co/spaces/lbourdois/Language-tags-demo. \r\nThe code is in Python so I don't know if it can be used by HF who seems to need something in Node.js but it serves as a proof of concept. The advantage is also that you can directly test ideas by enter things in a search bar and see what comes up. \r\n\r\nThis application is divided into 3 points:\r\n- The first is to enter a language in natural language to get its code which can then be filled in the YAML file of the README.MD files of the HF datasets or models in order to be referenced and found by everyone.\r\nIn practice, enter the language (e.g: `English`) you are interested in to get its associated tag (e.g: `en`). You can enter several languages by separating them with a comma (e.g `French,English,German`). You will be given priority to the ISO 639-3 code if it exists otherwise the Glottocode or the BCP47 code (for varieties in particular). If none of these codes are available, it links to a page where the user can contact HF to request to add this tag. \r\nIf you enter a BCP47 code, it must be entered as follows: `Language(Territory)`, for example `French(Canada)`. Attention! If you enter a BCP-47 language, it must be entered first, otherwise the plant code will be displayed. 
I have to fix this problem but I am moving to a new place, I don't have an internet connection when I want and I prefer to push this first version so that you can already test things now and not have to wait days or weeks.\r\nThis point is intended to simulate the user's side of the equation, which wonders which tag he should fill in for his language.\r\n\r\n\r\n- The second is to enter a language code to obtain the name of the language in natural language.\r\nIn practice, enter the tag (ISO 639-1/2/3, Glottolog or BCP-47) you are interested in (e.g: `fra`) to get its associated language (e.g: French). You can enter several languages by separating them with a comma (e.g `fra,eng,deu`). Attention! If you enter a BCP-47 code, it must be entered first, otherwise the plant code will be displayed. Same as the other bug above (it's actually the same one).\r\nThis point is intended to simulate the side of HF that for a given tag must return the correct language.\r\n\r\n\r\n\r\nTo code these two points, I tested two approaches. \r\n\r\n1. The first one (internal DB in the app) consists in querying a database that HF would have locally at their place. To create this database, I merged the ISO 639 database (https://iso639-3.sil.org/sites/iso639-3/files/downloads/iso-639-3.tab) and the Glottolog database (https://glottolog.org/meta/downloads). The result of this merge is visible in the 3rd point of the application qui is an overview of the database.\r\nIn the image below, on line 1 of the database, we can see that the Glottocode database gives an ISO 639-3 code (column ISO639P3code) but not the ISO 639 database (column 639-3). Do you have an explanation for this phenomenon?\r\n![image](https://user-images.githubusercontent.com/58078086/188433217-bf7cb606-7af4-40b5-861f-ed662468f6e4.png)\r\n\r\n\r\nFor BCP 47 codes of the type `fr-CA`, I have retrieved the ISO-3166 alpha 1 codes of the territories (https://www.iso.org/iso-3166-country-codes.html).\r\nIn practice, what I do is if we enter `fr-CA` is that the letters before the `-` refer to a language in the `Name` column for a `639-1` == `fr` (`639-3` for `fra` or `fre`) in the base of my image above. Then I look at the letters after the `-` which refers to a territory. It comes out `French (Canada)`. I used https://cldr.unicode.org/translation/displaynames/languagelocale-name-patterns for the pattern that came up.\r\n\r\n\r\n2. The second approach (with langcodes lib in the app) consists in using the Python `langcodes` library (https://github.com/rspeer/langcodes) which offers a lot of features in ready-made functions. It manages for example deprecated codes, the validity of an entered code, gives languages from code in the language of your choice (by default in English, but also autoglottonyms), etc. I invite you to read the README of the library. The only negative point is that it hasn't been updated for 10 months so basing your tag system on an external tool that isn't necessarily up to date can cause problems in the long run. But it is certainly an interesting source.\r\n\r\nFinally, I have added some information on the number of people speaking/reading the language(s) searched (figures provided by langcodes which are based on those given by ISO). This is not relevant for our topic but it could be figures that could be added as information on the https://huggingface.co/languages page. \r\n\r\n\r\n\r\nWhat could be done to improve the app if I have time:\r\n- Write the text for the app's homepage to describe what it does. 
This could serve as a basis for a documentation that I think will be necessary to add somewhere on the HF website to explain how the language tagging system works.\r\n- Deal with the bug mentioned above\r\n- Integrate ISO 3166-1 alpha 2 territories (https://www.iso.org/obp/ui#iso:pub:PUB500001:en)? They offer a finer granularity than ISO 3166-1 alpha 1 which is limited to the country level, but they are very administrative (for French, ISO 3166-1 alpha 2 gives us the \"départements\" for example).\r\n- Add autoglottonyms? (I only handle English language names for the moment)\r\n- For each language indicate to which family it belongs, in practice this could help to make data augmentation, but especially to classify the languages and find them more easily on the page https://huggingface.co/languages.", "Very impressive! Using the prompt 'Japhug' (a language name), the app finds the intended language:\r\n![image](https://user-images.githubusercontent.com/6072524/188441805-3af3a580-951e-4150-b5f9-64e1bde0992b.png)\r\n\r\nA first question: based on the Glottocode, would it be possible to grab the closest ISO639-3 code? In case there is no match for the exact language variety, one needs to explore the higher-level groupings, level by level. For this language (Japhug), the information provided in the extracted CSV file (`glottolog-languoids-v4.6.csv`) is: \r\n`sino1245/burm1265/naqi1236/qian1263/rgya1241/core1262/jiar1240` \r\nOne need not look further than the first higher-level grouping, [`jiar1240`](https://glottolog.org/resource/languoid/id/jiar1240), to get an ISO639-3 code, namely `jya`.\r\n\r\nThus users searching by language names would get ISO639-3 (often less fine-grained than Glottolog) as a bonus.\r\nIt might be possible to ask the Glottolog team to provide this piece of information as part of an export from their database.", "> on line 1 of the database, we can see that the Glottocode database gives an ISO 639-3 code (column ISO639P3code) but not the ISO 639 database (column 639-3). Do you have an explanation for this phenomenon?\r\n\r\nThat is because the language name 'Aewa' is not found in the Ethnologue (ISO 639-3) export that you are using. [This export in table form](https://iso639-3.sil.org/sites/iso639-3/files/downloads/iso-639-3.tab) only has one reference name (`Ref_Name`). For the language at issue, it is not 'Aewa' but ['Awishira'](https://www.ethnologue.com/language/ash).\r\n\r\nBy contrast, the language on line 0 of the database is called 'Abinomn' by both Ethnologue and Glottolog, and accordingly, columns `ISO639P3code` and `639-3` both contain the ISO 639-3 code, `bsa`.\r\n \r\nThe full Ethnologue database records alternate names for each language, and I'd bet that 'Aewa' is recorded among alternate names for the 'Ashiwira' language. I can't check because the full Ethnologue database is paywalled. \r\n![image](https://user-images.githubusercontent.com/6072524/188461409-e8c48036-df9b-4b56-9609-41cb9c3d3c3c.png)\r\n\r\n[Glottolog](https://glottolog.org/resource/languoid/id/abis1238) does provide the corresponding ISO 639-3 code for 'Aewa', `ash`, which is an exact match (it refers to the same variety as Glottolog `abis1238`).\r\nIn this specific case, Glottolog provides all the relevant information. I'd say that Glottolog can be trusted for all the codes they provide, including ISO 639-3 codes: they only include them when the match is good. 
\r\n\r\nSee previous comment about the cases where there is no exact match between Glottolog and ISO 639-3 (suggested workaround: look at a higher-level grouping to get an ISO 639-3 code).", "I will add these two points to my TODO list.\r\n- Since Glottolog can be trust, I will add a condition to the code that if there is no ISO 639-3 code in the \"official\" database (https://iso639-3.sil.org/sites/iso639-3/files/downloads/iso-639-3.tab), look for it in the \"ISO639P3code\" column of Glottolog.\r\n- For the point of adding the closest ISO 639-3 code for a Glottolog code, what convention should be adopted for the output? Just the ISO 639-3 code, or the ISO 639-3 code - Glottolog code, or the ISO 639-3 code - language name?\r\nTo use the example of `Japhug` , should it be just `jya`, or `jya-japh1234` or `jya-Japhug`?", "> * Integrate ISO 3166-1 alpha 2 territories (https://www.iso.org/obp/ui#iso:pub:PUB500001:en)? They offer a finer granularity than ISO 3166-1 alpha 1 which is limited to the country level, but they are very administrative (for French, ISO 3166-1 alpha 2 gives us the \"départements\" for example).\r\n\r\nI'm concerned with this sort of exploration. Not because I am against innovation. In fact this is an interesting thought exercise. However, to explore this thought further creates cognitive dissidence between BCP-47 authorized codes and other code sets which are not BP-47 compliant. For that reason, I think adding additional codes is a waste of time both for HF devs and for future users who get a confusing idea about language tagging. ", "Good job for the application!\r\n\r\n> On the Hub, there is the following dataset where French people speak in English: https://huggingface.co/datasets/Datatang/French_Speaking_English_Speech_Data_by_Mobile_Phone\r\n Is there a database to take this case into account? I have not found any code in the Glottolog database. If based on an IETF BCP-47 standard, I would tend to tag the dataset with \"en-fr\" but would this be something accepted by linguists?\r\n Based on the first post in this thread that there are about 8000 languages, if one considers that a given language can be pronounced by a speaker of the other 7999, that would theoretically make about 64 million BCP-47 language1-language2 codes existing. And even much more if we consider regionalists with language1_regionalism_x-language2_regionalism_y. I guess there is no such database.\r\n\r\n> Yes, you noted the difficulty here: that there are so many possible situations. Eventually, each dataset would required descriptors of its own. @BenjaminGalliot points out that, in addition to specifying the speakers' native languages, the degree of language proficiency would also be relevant. How many years did the speakers spend in which area? Talking which languages? In what chronological order? Etc. The complexity defies encoding. The purpose of language codes is to allow for searches that group resources into sets that make sense. 
Additional information is very important, but would seem to be a matter for 'comments' fields.\r\n\r\nTo briefly complete what I said on this subject in a private discussion group, there is a lot of (meta)data associated with each element of a corpus (which language level, according to which criteria, knowing that even among native speakers there are differences, some of which may go beyond what seems obvious to us from a linguistic point of view, such as socio-professional category, life history, environment in the broad sense, etc.), which can be placed in ad-hoc columns, or more freely in a comment/note column. And it is the role of the researcher (in this case a linguist, in all likelihood) to do analyses (statistics...) to determine the relevant data, including criteria that may justify separating different languages (in the broad sense), making separate corpora, etc. Putting this information in the language code is in my opinion doing the job in the opposite and wrong direction, as well as bringing other problems, like where to stop in the list of multidimensional criteria to be integrated, so in my opinion, here, the minimum is the best (the important thing is in my opinion to have well-documented data, globally, by sub-corpus or by line)...\r\n\r\n> If you are going to use Glottolog codes use them after an -x- tag in the BCP-47 format to maintain BCP-47 validity.\r\n\r\nYes, for the current corpora, I have written:\r\n\r\n```\r\nlanguage:\r\n- jya\r\n- nru\r\nlanguage_bcp47:\r\n- x-japh1234\r\n- x-yong1288\r\n```\r\n\r\n> * Add autoglottonyms? (I only handle English language names for the moment)\r\n\r\nAutoglossonyms are useful (I use them prior to other glossonyms), but I'm not sure there is an easy way to retrieve them. We can find some of them in the \"Alternative Names\" panel of Glottolog, but even if we have an API to retrieve them easily, their associated language code will often not be the one we are in (hence the need to do several cycles to find one, which might not be the right one...). Maybe this problem needs more investigation...\r\n\r\n> For the point of adding the closest ISO 639-3 code for a Glottolog code, what convention should be adopted for the output? Just the ISO 639-3 code, or the ISO 639-3 code - Glottolog code, or the ISO 639-3 code - language name?\r\nTo use the example of Japhug , should it be just jya, or jya-japh1234 or jya-Japhug?\r\n\r\nI strongly insist not to add **a** language name after the code, it would restart a spiral of problems, notably the choice of the language in question:\r\n* the autoglossonym: in my opinion the best choice, but you have to know it…\r\n* the English name: iniquitous,\r\n* the name in the administratively/politically dominant language of the target language if it is relevant (strictly localized without overlapping, for example): iniquitous and tendentious (and in a way a special case of the previous one)...\r\n* etc.\r\n", "> To get a visual idea of these first exchanges, I coded a Streamlit app that I put online on Spaces: https://huggingface.co/spaces/lbourdois/Language-tags-demo.\r\n> The code is in Python so I don't know if it can be used by HF who seems to need something in Node.js but it serves as a proof of concept. The advantage is also that you can directly test ideas by enter things in a search bar and see what comes up.\r\n\r\nThis is really great. You're doing a fantastic job. I love watching the creative process evolve. It is exciting. 
Let me provide some links to some search interfaces for further inspiration. I always find it helpful to know how others have approached a problem when figuring out my approach. I will link to three examples Glottolog, r12a's language sub-tag chooser, and the FLEx project builder wizard. The first two are online, but the last one is in an application which must be downloaded and works only on windows or linux. I have placed some notes on each of the screenshots.\r\n\r\n* **[Glottolog](https://glottolog.org/)** | [Search Query](https://glottolog.org/glottolog?name=en&namequerytype=part&multilingual=on#2/20.9/150.0) \r\n\r\n![Glottolog1](https://user-images.githubusercontent.com/40230/188494425-84ee6ecf-6868-4684-a4ae-008973f3b367.png)\r\n![Glottolog2](https://user-images.githubusercontent.com/40230/188494426-fc1c225c-f99a-46b5-a1aa-950cf7912ce3.png)\r\n\r\n\r\n* **[r12a language sub-tag chooser](https://r12a.github.io/app-subtags/)** | [Code on github](https://github.com/r12a/app-subtags)\r\n\r\n![r12a1](https://user-images.githubusercontent.com/40230/188495349-8e53be68-8433-46ff-a0c7-c2f6e25458b6.png)\r\n\r\n\r\n* **FLEx Language Chooser** | [application page](https://software.sil.org/fieldworks/)\r\n![FLEx1](https://user-images.githubusercontent.com/40230/188499742-82c5601e-7e37-4863-bd63-8bff8c0694e3.png)\r\n\r\n", "> In practice, what I do is if we enter `fr-CA` is that the letters before the `-` refer to a language in the `Name` column for a `639-1` == `fr` (`639-3` for `fra` or `fre`) in the base of my image above. Then I look at the letters after the `-` which refers to a territory. It comes out `French (Canada)`. I used https://cldr.unicode.org/translation/displaynames/languagelocale-name-patterns for the pattern that came up.\r\n\r\nWhat you are doing is looking at the algorithm for Locale generation rather than BCP-47's original documentation. I'm not sure there are difference, there might be. I know that locale IDs generally follow BCP-47 But I think there are some differences such as the use of `_` vs. `-`. ", "> A first question: based on the Glottocode, would it be possible to grab the closest ISO639-3 code? In case there is no match for the exact language variety, one needs to explore the higher-level groupings, level by level. For this language (Japhug), the information provided in the extracted CSV file (`glottolog-languoids-v4.6.csv`) is: `sino1245/burm1265/naqi1236/qian1263/rgya1241/core1262/jiar1240` One need not look further than the first higher-level grouping, [`jiar1240`](https://glottolog.org/resource/languoid/id/jiar1240), to get an ISO639-3 code, namely `jya`.\r\n> \r\n> Thus users searching by language names would get ISO639-3 (often less fine-grained than Glottolog) as a bonus. It might be possible to ask the Glottolog team to provide this piece of information as part of an export from their database.\r\n\r\nThis is logical, but the fine grained assertions are not the same. That is just because they are in a hierarchical structure today doesn't mean they will be tomorrow. In some cases the glottolog is clearly referring to sub-language variants which will never receive full language status, whereas in other cases glottolog is referencing to unequal entities one or more of which should be a language. Many of the codes in glottolog have no associated documentation indicating what sort of speech variety they are. 
", "@lbourdois \r\n> * Since Glottolog can be trust, I will add a condition to the code that if there is no ISO 639-3 code in the \"official\" database (https://iso639-3.sil.org/sites/iso639-3/files/downloads/iso-639-3.tab), look for it in the \"ISO639P3code\" column of Glottolog.\r\n\r\nI'm confused here... if there is no ISO639-3 code in the official database from the registrar, why would you look for an \"unofficial\" code from someone else? What is the use case here?", "> For the point of adding the closest ISO 639-3 code for a Glottolog code, what convention should be adopted for the output? Just the ISO 639-3 code, or the ISO 639-3 code - Glottolog code, or the ISO 639-3 code - language name?\r\nTo use the example of Japhug , should it be just jya, or jya-japh1234 or jya-Japhug?\r\n\r\n(answer edited in view of [Benjamin Galliot's comment](https://github.com/huggingface/datasets/issues/4881#issuecomment-1237420600) \r\nEasy part of the answer first: jya-Japhug is out, because, as @BenjaminGalliot pointed out above, mixing language names with language codes will make trouble. For Japhug, `jya-Japhug` looks rather good: the pair looks nice, the one (`jya`) packed together, the other (`Japhug`) good and complete while still pretty compact. But think about languages like 'Yongning Na' or 'Yucatán Maya': a code with a space in the middle, like `nru-Yongning Na`, is really unsightly and unwieldy, not?\r\n\r\nSome [principles for language naming in English](http://hdl.handle.net/10125/24725) have been put forward, with some linguistic arguments, but always supposing that such standardization is desirable, actual standardization of language names in English may well never happen.\r\n\r\nAs for `jya-japh1234`: again, at first sight it seems cute, combining two fierce competitors (Ethnologue and Glottolog) into something that gets the best of both worlds. \r\nBut @HughP has a point: _adding additional codes is a waste of time both for HF devs and for future users who get a confusing idea about language tagging_ Strong wording, for an important comment: better stick with BCP 47. \r\n\r\nSo the solution pointed out by Benjamin, from Frances Gillis-Webber and Sabine Tittel, looks attractive: \r\njya-x-japh1234\r\n\r\nOn the other hand, if the idea for HF Datasets is simply to add the closest ISO 639-3 code for a Glottolog code, maybe it could be provided simply in three letters: providing the 'raw' ISO 639-3 code `jya`. 
Availability of 'straight' ISO 639-3 codes could save trouble for some users, and those who want more detail could look at the rest of the metadata and general information associated with datasets.", "The problem seems to have already been raised here: https://drops.dagstuhl.de/opus/volltexte/2019/10368/pdf/OASIcs-LDK-2019-4.pdf\r\n\r\nAn example can be seen here :\r\n\r\n> 3.1.2 The use of privateuse sub-tag\r\nIn light of unambiguous language codes being available for the two Khoisan varieties, we\r\npropose to combine the ISO 639-3 code for the parent language N‖ng, i.e., ‘ngh’, with the\r\nprivateuse sub-tag ‘x-’ and the respective Glottocodes stated above.\r\nThe language tags for N|uu and ‖’Au can then be defined accordingly:\r\nN|uu: ngh-x-nuuu1242\r\n‖’Au: ngh-x-auni1243\r\n\r\nBy the way, while searching for this, I came across this application: https://huggingface.co/spaces/cdleong/langcode-search", "> > * Since Glottolog can be trust, I will add a condition to the code that if there is no ISO 639-3 code in the \"official\" database (https://iso639-3.sil.org/sites/iso639-3/files/downloads/iso-639-3.tab), look for it in the \"ISO639P3code\" column of Glottolog.\r\n> \r\n> I'm confused here... if there is no ISO639-3 code in the official database from the registrar, why would you look for an \"unofficial\" code from someone else? What is the use case here?\r\n\r\nHi @HughP, I'm happy to clear what confusion may exist here :innocent: Here is the use case. \r\nGuillaume Jacques (@rgyalrong) put together a sizeable corpus of the Japhug language. It is up on HF Datasets ([here](https://huggingface.co/datasets/Lacito/pangloss/viewer/japh1234)) as well as on Zenodo. \r\n\r\nZenodo is an all-purpose repository without adequate domain-specific metadata (\"[métadonnées métier](https://www.cines.fr/archivage/des-expertises/les-metadonnees/metadonnees-metier/)\"), and the deposits in there are not easy to locate. The Zenodo deposit is intended for a highly specific user case: someone reads about the dataset in a paper, goes to the address on Zenodo and grabs the dataset at one go. \r\n\r\nHF Datasets, on the other hand, allows users to look around among corpora. The Japhug corpus needs proper tagging so that HF Datasets users can find out about it. \r\nJaphug has an entry of its own in Glottolog, whereas it lacks an entry of its own in Ethnologue. Hence the practical usefulness of Glottolog. Ethnologue pools together, under the code `jya`, three different languages (Japhug, Tshobdun `tsho1240` and Zbu `zbua1234`). \r\n\r\nI hope that this helps.", "> By the way, while searching for this, I came across this application: https://huggingface.co/spaces/cdleong/langcode-search\r\n\r\nReally relevant Space, so tagging its author @cdleong, just in case!", "@cdleong A one-stop shop for language codes: terrific!\r\nHow do you feel about the use of Glottocodes? When searching the language names 'Japhug' and 'Yongning Na' (real examples, related to a HF Datasets deposit & various research projects), the relevant Glottocodes are retrieved, and that is great (and not that easy, notably with the space in the middle of 'Yongning Na'). But this positive result is 'hidden' in the results page. 
Specifically: \r\n\r\n- for Japhug: when searching by language name ('Japhug'), the result in big print is 'Failure', even though there is an available Glottocode (at bottom).\r\n![image](https://user-images.githubusercontent.com/6072524/188604619-a5032f53-6d2c-4751-b83b-bf70a5bf3b22.png)\r\nWhen searching by Glottocode (japh1234), same outcome: 'Result: failure!' (even though this _is_ the right Glottocode\r\nWhen searching by x-japh1234 (Glottocode encapsulated in BCP 47 syntax), one gets the message \r\n\r\n> ''x-japh1234' parses meaningfully as a language tag according to IANA\"\r\n\r\nbut there is paradoxically no link provided to Glottolog: the 'Glottolog' part of the results page is empty\r\n![image](https://user-images.githubusercontent.com/6072524/188605698-91a39982-ae70-4c48-94ae-cceeb06c25f5.png)\r\n\r\n- Yongning Na: the correct code is identified (yong1288) but instead of foregrounding this exact match, the first result that comes up is a completely different language, called 'Yong'. \r\n\r\nTrying to formulate a conclusion (admittedly, this note is not based on intensive testing, it is just feedback on initial contact): from a user perspective, it seems that the tool could make more extensive use of Glottolog. `langcode-search` does a great job querying Glottolog, why not make more extensive use of that information? (including: to arrive at the nearest ISO 639-3 code)" ]
2022-08-23T20:14:24Z
2022-09-14T07:32:30Z
null
NONE
null
null
null
**The problem:** Language diversity is an important dimension of the diversity of datasets. To find one's way around datasets, being able to search by language name and by standardized codes appears crucial. Currently the list of language codes is [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/resources/languages.json), right? At about 1,500 entries, it is roughly at 1/4th of the world's diversity of extant languages. (Probably less, as the list of 1,418 contains variants that are linguistically very close: 108 varieties of English, for instance.) Looking forward to ever increasing coverage, how will the list of language names and language codes improve over time? Enrichment of the custom list by HFT contributors (like [here](https://github.com/huggingface/datasets/pull/4880)) has several issues: * progress is likely to be slow: ![image](https://user-images.githubusercontent.com/6072524/186253353-62f42168-3d31-4105-be1c-5eb1f818d528.png) (input required from reviewers, etc.) * the more contributors, the less consistency can be expected among contributions. No need to elaborate on how much confusion is likely to ensue as datasets accumulate. * there is no information on which language relates with which: no encoding of the special closeness between the languages of the Northwestern Germanic branch (English+Dutch+German etc.), for instance. Information on phylogenetic closeness can be relevant to run experiments on transfer of technology from one language to its close relatives. **A solution that seems desirable:** Connecting to an established database that (i) aims at full coverage of the world's languages and (ii) has information on higher-level groupings, alternative names, etc. It takes a lot of hard work to do such databases. Two important initiatives are [Ethnologue](https://www.ethnologue.com/) (ISO standard) and [Glottolog](https://glottolog.org/). Both have pros and cons. Glottolog contains references to Ethnologue identifiers, so adopting Glottolog entails getting the advantages of both sets of language codes. Both seem technically accessible & 'developer-friendly'. Glottolog has a [GitHub repo](https://github.com/glottolog/glottolog). For Ethnologue, harvesting tools have been devised (see [here](https://github.com/lyy1994/ethnologue); I did not try it out). In case a conversation with linguists seemed in order here, I'd be happy to participate ('pro bono', of course), & to rustle up more colleagues as useful, to help this useful development happen. With appreciation of HFT,
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4881/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4881/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4880
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4880/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4880/comments
https://api.github.com/repos/huggingface/datasets/issues/4880/events
https://github.com/huggingface/datasets/pull/4880
1,348,452,776
PR_kwDODunzps49qyJr
4,880
Added names of less-studied languages
{ "avatar_url": "https://avatars.githubusercontent.com/u/23100612?v=4", "events_url": "https://api.github.com/users/BenjaminGalliot/events{/privacy}", "followers_url": "https://api.github.com/users/BenjaminGalliot/followers", "following_url": "https://api.github.com/users/BenjaminGalliot/following{/other_user}", "gists_url": "https://api.github.com/users/BenjaminGalliot/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BenjaminGalliot", "id": 23100612, "login": "BenjaminGalliot", "node_id": "MDQ6VXNlcjIzMTAwNjEy", "organizations_url": "https://api.github.com/users/BenjaminGalliot/orgs", "received_events_url": "https://api.github.com/users/BenjaminGalliot/received_events", "repos_url": "https://api.github.com/users/BenjaminGalliot/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BenjaminGalliot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BenjaminGalliot/subscriptions", "type": "User", "url": "https://api.github.com/users/BenjaminGalliot" }
[]
closed
false
null
[]
null
[ "OK, I removed Glottolog codes and only added ISO 639-3 ones. The former are for the moment in corpus card description, language details, and in subcorpora names.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4880). All of your documentation changes will be reflected on that endpoint." ]
2022-08-23T19:32:38Z
2022-08-24T12:52:46Z
2022-08-24T12:52:46Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4880.diff", "html_url": "https://github.com/huggingface/datasets/pull/4880", "merged_at": "2022-08-24T12:52:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/4880.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4880" }
Added names of less-studied languages (nru – Narua and jya – Japhug) for existing datasets.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4880/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4880/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4879
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4879/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4879/comments
https://api.github.com/repos/huggingface/datasets/issues/4879/events
https://github.com/huggingface/datasets/pull/4879
1,348,346,407
PR_kwDODunzps49qbOl
4,879
Fix Citation Information section in dataset cards
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4879). All of your documentation changes will be reflected on that endpoint." ]
2022-08-23T18:06:43Z
2022-09-27T14:04:45Z
2022-08-24T04:09:07Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4879.diff", "html_url": "https://github.com/huggingface/datasets/pull/4879", "merged_at": "2022-08-24T04:09:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/4879.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4879" }
Fix Citation Information section in dataset cards: - cc_news - conllpp - datacommons_factcheck - gnad10 - id_panl_bppt - jigsaw_toxicity_pred - kinnews_kirnews - kor_sarcasm - makhzan - reasoning_bg - ro_sts - ro_sts_parallel - sanskrit_classic - telugu_news - thaiqa_squad - wiki_movies This PR partially fixes the Citation Information section in dataset cards. Subsequent PRs will follow to complete this task.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4879/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4879/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4878
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4878/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4878/comments
https://api.github.com/repos/huggingface/datasets/issues/4878/events
https://github.com/huggingface/datasets/issues/4878
1,348,270,141
I_kwDODunzps5QXPg9
4,878
[not really a bug] `identical_ok` is deprecated in huggingface-hub's `upload_file`
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "color": "008672", "default": true, "description": "Extra attention is needed", "id": 1935892884, "name": "help wanted", "node_id": "MDU6TGFiZWwxOTM1ODkyODg0", "url": "https://api.github.com/repos/huggingface/datasets/labels/help%20wanted" }, { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
closed
false
null
[]
null
[ "Resolved via https://github.com/huggingface/datasets/pull/4937." ]
2022-08-23T17:09:55Z
2022-09-13T14:00:06Z
2022-09-13T14:00:05Z
CONTRIBUTOR
null
null
null
In the huggingface-hub dependency, the `identical_ok` argument has no effect in `upload_file` (and it will be removed soon) See https://github.com/huggingface/huggingface_hub/blob/43499582b19df1ed081a5b2bd7a364e9cacdc91d/src/huggingface_hub/hf_api.py#L2164-L2169 It's used here: https://github.com/huggingface/datasets/blob/fcfcc951a73efbc677f9def9a8707d0af93d5890/src/datasets/dataset_dict.py#L1373-L1381 https://github.com/huggingface/datasets/blob/fdcb8b144ce3ef241410281e125bd03e87b8caa1/src/datasets/arrow_dataset.py#L4354-L4362 https://github.com/huggingface/datasets/blob/fdcb8b144ce3ef241410281e125bd03e87b8caa1/src/datasets/arrow_dataset.py#L4197-L4213 We should remove it. Maybe the third code sample has an unexpected behavior since it uses the non-default value `identical_ok = False`, but the argument is ignored.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4878/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4878/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4877
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4877/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4877/comments
https://api.github.com/repos/huggingface/datasets/issues/4877/events
https://github.com/huggingface/datasets/pull/4877
1,348,246,755
PR_kwDODunzps49qF-w
4,877
Fix documentation card of covid_qa_castorini dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4877). All of your documentation changes will be reflected on that endpoint." ]
2022-08-23T16:52:33Z
2022-08-23T18:05:01Z
2022-08-23T18:05:00Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4877.diff", "html_url": "https://github.com/huggingface/datasets/pull/4877", "merged_at": "2022-08-23T18:05:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/4877.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4877" }
Fix documentation card of covid_qa_castorini dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4877/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4877/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4876
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4876/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4876/comments
https://api.github.com/repos/huggingface/datasets/issues/4876/events
https://github.com/huggingface/datasets/issues/4876
1,348,202,678
I_kwDODunzps5QW_C2
4,876
Move DatasetInfo from `datasets_infos.json` to the YAML tags in `README.md`
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "also @osanseviero @Pierrci @SBrandeis potentially", "Love this in principle 🚀 \r\n\r\nLet's keep in mind users might rely on `dataset_infos.json` already.\r\n\r\nI'm not convinced by the two-syntax solution, wouldn't it be simpler to have only one syntax with a `default` config for datasets with only one config? ie, always having the `configs` field. This makes parsing the metadata easier IMO.\r\n\r\nMight also be good to wrap the tags under a `datasets_info` tag as follows:\r\n\r\n```yaml\r\ndescription: ...\r\ncitation: ...\r\ndataset_infos:\r\n download_size: 35142551\r\n dataset_size: 89789763\r\n version: 1.0.0\r\n configs:\r\n - ...\r\n[...]\r\n```\r\n\r\nLet's also keep in mind that extracting YAML metadata from a markdown readme is a bit more fastidious for users than just parsing a JSON file.", "> Let's keep in mind users might rely on dataset_infos.json already.\r\n\r\nYea we'll full full backward compatibility\r\n\r\n> Let's also keep in mind that extracting YAML metadata from a markdown readme is a bit more fastidious for users than just parsing a JSON file.\r\n\r\nThe main things that may use or ingest these data IMO are:\r\n- users in the UI or IDE\r\n- `datasets` to populate `DatasetInfo` python object\r\n- moon landing which is already parsing YAML\r\n\r\nAm I missing something ? If not I think it's ok to use YAML\r\n\r\n> Might also be good to wrap the tags under a datasets_info tag as follows:\r\n\r\nMaybe one single syntax like this then ?\r\n```yaml\r\ndataset_infos:\r\n- config: unlabeled\r\n download_size: 35142551\r\n dataset_size: 89789763\r\n version: 1.0.0\r\n splits:\r\n - name: train\r\n num_examples: 10000\r\n features:\r\n - name: text\r\n dtype: string\r\n- config: labeled\r\n download_size: 35142551\r\n dataset_size: 89789763\r\n version: 1.0.0\r\n splits:\r\n - name: train\r\n num_examples: 100\r\n features:\r\n - name: text\r\n dtype: string\r\n - name: label\r\n dtype: ClassLabel\r\n names:\r\n - negative\r\n - positive\r\n```\r\nand when you have only one config\r\n```yaml\r\ndataset_infos:\r\n- config: default\r\n splits:\r\n - name: train\r\n num_examples: 10000\r\n features:\r\n - name: text\r\n dtype: string\r\n```", "love the idea, and the trend in general to move more things (like tasks) to a single place (YAML).\r\n\r\nalso, if you browse files on a dataset's page (in \"Files and versions\"), raw `README.md` files looks nice and readable, while `.json` files are just one long line that users need to scroll. \r\n\r\n> Let's also keep in mind that extracting YAML metadata from a markdown readme is a bit more fastidious for users than just parsing a JSON file.\r\n\r\ndo users often parse `datasets_infos.json` file themselves? 
", "> do users often parse datasets_infos.json file themselves?\r\n\r\nNot AFAIK, but I'm sure there should be a few users.\r\nUsers that access these info via the `DatasetInfo` from `datasets` won't see the change though e.g.\r\n```python\r\n>> from datasets import get_datasets_infos\r\n>>> get_datasets_infos(\"squad\")\r\n{'plain_text': DatasetInfo(description='Stanford Question Answering Dataset...\r\n```", "> Maybe one single syntax like this then ?\r\n\r\nLGTM!\r\n\r\n> The main things that may use or ingest these data IMO are:\r\n> - users in the UI or IDE\r\n> - datasets to populate DatasetInfo python object\r\n> - moon landing which is already parsing YAML\r\n\r\nFair point!\r\n\r\nHaving dataset info in the README's YAML is great for API / `huggingface_hub` consumers as well as it will be inserted in the `cardData` field out of the box 🔥 \r\n", "Very supportive of this!\r\n\r\nNesting an array of configs inside `dataset_infos: ` sounds good to me. One small tweak is that `config: default` can be optional for the default config (which can be the first one by convention)\r\n\r\nWe'll be able to implement metadata validation on the Hub side so we ensure that those metadata are always in the right format (maybe for @coyotte508 ? cc @Pierrci). From a quick glance the `features` might be the harder part to validate here, any doc will be welcome.\r\n\r\n### Other high-level points:\r\n- as we move from mostly academic datasets to *all* datasets (which include the data inside the repos), my intuition is that more and more datasets (Hub-stored) are going to be **single-config**\r\n- similarly, less and less datasets will have a loading script, **just the data + some metadata**\r\n- to lower the barrier to entry to contribution, in the long term users shouldn't need to compute/update this data via a command line. 
It could be filled automatically on the Hub through a \"bot\" inside Discussions & Pull requests for instance.", "re: `config: default`\r\n\r\nNote also that the default config is not named `default`, afaiu, but create from the repo name, eg: https://huggingface.co/datasets/nbtpj/bionlp2021SAS default config is `nbtpj--bionlp2021SAS` (which is awful)", "> Note also that the default config is not named default, afaiu, but create from the repo name, eg: https://huggingface.co/datasets/nbtpj/bionlp2021SAS default config is nbtpj--bionlp2021SAS (which is awful)\r\n\r\nWe can change this to `default` I think or something else", "> From a quick glance the features might be the harder part to validate here, any doc will be welcome.\r\n\r\nI dug into features validation, see:\r\n\r\n- the OpenAPI spec: https://github.com/huggingface/datasets-server/blob/main/chart/static-files/openapi.json#L460-L697\r\n- the node.js code: https://github.com/huggingface/moon-landing/blob/upgrade-datasets-server-client/server/lib/datasets/FeatureType.ts", "> We can change this to default I think or something else\r\n\r\nI created https://github.com/huggingface/datasets/issues/4902 to discuss that", "> Note also that the default config is not named `default`, afaiu, but create from the repo name\r\n\r\nin case of single-config you can even hide the config name from the UI IMO\r\n\r\n> I dug into features validation, see: the OpenAPI spec\r\n\r\nin moon-landing we use [Joi](https://joi.dev/api/) to validate metadata so we would need to generate from Joi code from the OpenAPI spec (or from somewhere else) but I guess that's doable – or just rewrite it manually, as it won't change often", "I remember there was an ongoing discussion on this topic:\r\n- #3507\r\n\r\nI recall some of the concerns raised on that discussion:\r\n- @lhoestq: Tensorflow Datasets catalog includes a community catalog where you can find and use HF datasets. They are using the exported dataset_infos.json files from github to get the metadata: [#3507 (comment)](https://github.com/huggingface/datasets/issues/3507#issuecomment-1056997627)\r\n- @severo: [#3507 (comment)](https://github.com/huggingface/datasets/issues/3507#issuecomment-1042779776)\r\n - the metadata header might be very long, before reaching the start of the README/dataset card. \r\n - It also somewhat prevents including large strings like the checksums\r\n - two concepts are mixed in the same file (metadata and documentation). This means that if you're interested only in one of them, you still have to know how to parse the whole file. \r\n- @severo: the future \"datasets server\" could be in charge of generating the dataset-info.json file: [#3507 (comment)](https://github.com/huggingface/datasets/issues/3507#issuecomment-1033752157)", "Thanks for bringing these points up !\r\n\r\n> @lhoestq: Tensorflow Datasets catalog includes a community catalog where you can find and use HF datasets. They are using the exported dataset_infos.json files from github to get the metadata: https://github.com/huggingface/datasets/issues/3507#issuecomment-1056997627\r\n\r\nThe TFDS implementation is not super advanced, so it's ok IMO as long as we don't break all the dataset scripts. 
Note that users can still use `to_tf_dataset`.\r\n\r\nWe had a chance to discuss the two nexts points with @julien-c as well:\r\n\r\n> @severo: https://github.com/huggingface/datasets/issues/3507#issuecomment-1042779776\r\nthe metadata header might be very long, before reaching the start of the README/dataset card.\r\n\r\nIf we don't add the checksums we should be fine. We can also set a maximum number of supported configs in the README to keep it readable.\r\n\r\n> @severo: the future \"datasets server\" could be in charge of generating the dataset-info.json file: https://github.com/huggingface/datasets/issues/3507#issuecomment-1033752157\r\n\r\nI guess the \"HF Hub actions\" could open PRs to do the same in the YAML directly\r\n", "Thanks for linking that similar discussion for context, @albertvillanova!" ]
2022-08-23T16:16:41Z
2022-10-03T09:11:13Z
2022-10-03T09:11:13Z
MEMBER
null
null
null
Currently there are two places to find metadata for datasets: - datasets_infos.json, which contains **per dataset config** - description - citation - license - splits and sizes - checksums of the data files - feature types - and more - YAML tags, which contain - license - language - train-eval-index - and more It would be nice to have a single place instead. We can rely on the YAML tags more than the JSON file for consistency with models. And it would all be indexed by our back-end directly, which is nice to have. One way would be to move everything to the YAML tags except the checksums (there can be tens of thousands of them). The description/citation is already in the dataset card so we probably don't need to have them in the YAML card, it would be redundant. Here is an example for SQuAD ```yaml download_size: 35142551 dataset_size: 89789763 version: 1.0.0 splits: - name: train num_examples: 87599 num_bytes: 79317110 - name: validation num_examples: 10570 num_bytes: 10472653 features: - name: id dtype: string - name: title dtype: string - name: context dtype: string - name: question dtype: string - name: answers struct: - name: text list: dtype: string - name: answer_start list: dtype: int32 ``` Since there is only one configuration for SQuAD, this structure is ok. For datasets with several configs we can see in a second step, but IMO it would be ok to have these fields per config using another syntax ```yaml configs: - config: unlabeled splits: - name: train num_examples: 10000 features: - name: text dtype: string - config: labeled splits: - name: train num_examples: 100 features: - name: text dtype: string - name: label dtype: ClassLabel names: - negative - positive ``` So in the end you could specify a YAML tag either at the top level (for all configs) or per config in the `configs` field Alternatively we could keep config specific stuff in the `dataset_infos.json` as it it today Not sure yet what's the best approach here but cc @julien-c @mariosasko @albertvillanova @polinaeterna for feedback :)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 4, "heart": 3, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 7, "url": "https://api.github.com/repos/huggingface/datasets/issues/4876/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4876/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4875
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4875/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4875/comments
https://api.github.com/repos/huggingface/datasets/issues/4875/events
https://github.com/huggingface/datasets/issues/4875
1,348,095,686
I_kwDODunzps5QWk7G
4,875
`_resolve_features` ignores the token
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "Hi ! Your HF_ENDPOINT seems wrong because of the extra \"/\"\r\n```diff\r\n- os.environ[\"HF_ENDPOINT\"] = \"https://hub-ci.huggingface.co/\"\r\n+ os.environ[\"HF_ENDPOINT\"] = \"https://hub-ci.huggingface.co\"\r\n```\r\n\r\ncan you try again without the extra \"/\" ?", "Oh, yes, sorry, but it's not the issue.\r\n\r\nIn my code, I set `HF_ENDPOINT=https://hub-ci.huggingface.co`. I added `os.environ[\"HF_ENDPOINT\"] = \"https://hub-ci.huggingface.co/\"` afterward just to indicate that we had to have this env var and made a mistake there", "I can't reproduce on my side. I tried using a private dataset repo with a CSV file on hub-ci\r\n\r\nWhat's your version of `huggingface_hub` ?", "I can't reproduce either... Not sure what has occurred, very sorry to have made you lost your time on that ", "I got something similar in https://github.com/huggingface/datasets-server/pull/608. Look how changing the order of the tests (https://github.com/huggingface/datasets-server/pull/608/commits/2c50fe833323de3dfdc76c5cd68639279e0887f8) change the result, which means that something has a side-effect:\r\n- https://github.com/huggingface/datasets-server/actions/runs/3264636253/jobs/5365612918 works\r\n- https://github.com/huggingface/datasets-server/actions/runs/3264651839/jobs/5365654924 does not work\r\n\r\nI still couldn't reproduce it with a simpler script... ", "The issue happens because `extend_module_for_streaming` can't be used several times on packaged builders like `csv` to apply a new auth token. Indeed `extend_module_for_streaming` only applies authentication once, and on subsequent calls does nothing:\r\n\r\nhttps://github.com/huggingface/datasets/blob/07b7c38d9e9c72c74b02524c432ca64d0d3738f4/src/datasets/streaming.py#L62-L64\r\n\r\nThis behavior exists because the authenticatoin wrapper only supports one token. This is an issue for packaged builders which can be used to load several datasets, so it may require several tokens.\r\n\r\nThis can be fixed by storing a dict `token_per_repo_id` instead of `use_auth_token` in the authentication wrapper, and by making it possible to update the authentication wrapper with a new token", "I fixed the datasets-server CI with: https://github.com/huggingface/datasets-server/pull/608\r\n\r\nSee https://github.com/huggingface/datasets-server/actions/runs/3265359326/jobs/5367445018\r\n\r\nThanks for the help @lhoestq !", "> This can be fixed by storing a dict token_per_repo_id instead of use_auth_token in the authentication wrapper, and by making it possible to update the authentication wrapper with a new token\r\n\r\nIf I call the module on the same repo twice: first with authentication, then without authentication, would the second call use authentication anyway? It sounds like a bug: the argument passed to the function would be silently ignored.", "Yes exactly, this is a known bug", "And do you think this bug could be solved as well when fixing this issue?", "yes definitely !" ]
2022-08-23T14:57:36Z
2022-10-17T13:45:47Z
null
CONTRIBUTOR
null
null
null
## Describe the bug When calling [`_resolve_features()`](https://github.com/huggingface/datasets/blob/54b532a8a2f5353fdb0207578162153f7b2da2ec/src/datasets/iterable_dataset.py#L1255) on a gated dataset, ie. a dataset which requires a token to be loaded, the token seems to be ignored even if it has been provided to `load_dataset` before. ## Steps to reproduce the bug ```python import os os.environ["HF_ENDPOINT"] = "https://hub-ci.huggingface.co/" hf_token = "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" from datasets import load_dataset # public dataset_name = "__DUMMY_DATASETS_SERVER_USER__/repo_csv_data-16612654226756" config_name = "__DUMMY_DATASETS_SERVER_USER__--repo_csv_data-16612654226756" split_name = "train" iterable_dataset = load_dataset( dataset_name, name=config_name, split=split_name, streaming=True, use_auth_token=hf_token, ) iterable_dataset = iterable_dataset._resolve_features() print(iterable_dataset.features) # gated dataset_name = "__DUMMY_DATASETS_SERVER_USER__/repo_csv_data-16612654317644" config_name = "__DUMMY_DATASETS_SERVER_USER__--repo_csv_data-16612654317644" split_name = "train" iterable_dataset = load_dataset( dataset_name, name=config_name, split=split_name, streaming=True, use_auth_token=hf_token, ) try: iterable_dataset = iterable_dataset._resolve_features() except FileNotFoundError as e: print("FAILS") ``` ## Expected results I expect to have the same result on a public dataset and on a gated (or private) dataset, if the token has been provided. ## Actual results An exception is thrown on gated datasets. ## Environment info - `datasets` version: 2.4.0 - Platform: Linux-5.15.0-1017-aws-x86_64-with-glibc2.35 - Python version: 3.9.6 - PyArrow version: 7.0.0 - Pandas version: 1.4.2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4875/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4875/timeline
null
reopened
false
https://api.github.com/repos/huggingface/datasets/issues/4874
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4874/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4874/comments
https://api.github.com/repos/huggingface/datasets/issues/4874/events
https://github.com/huggingface/datasets/pull/4874
1,347,618,197
PR_kwDODunzps49n_nI
4,874
[docs] Some tiny doc tweaks
{ "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/julien-c", "id": 326577, "login": "julien-c", "node_id": "MDQ6VXNlcjMyNjU3Nw==", "organizations_url": "https://api.github.com/users/julien-c/orgs", "received_events_url": "https://api.github.com/users/julien-c/received_events", "repos_url": "https://api.github.com/users/julien-c/repos", "site_admin": false, "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "type": "User", "url": "https://api.github.com/users/julien-c" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4874). All of your documentation changes will be reflected on that endpoint." ]
2022-08-23T09:19:40Z
2022-08-24T17:27:57Z
2022-08-24T17:27:56Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4874.diff", "html_url": "https://github.com/huggingface/datasets/pull/4874", "merged_at": "2022-08-24T17:27:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/4874.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4874" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4874/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4874/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4873
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4873/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4873/comments
https://api.github.com/repos/huggingface/datasets/issues/4873/events
https://github.com/huggingface/datasets/issues/4873
1,347,592,022
I_kwDODunzps5QUp9W
4,873
Multiple dataloader memory error
{ "avatar_url": "https://avatars.githubusercontent.com/u/13767887?v=4", "events_url": "https://api.github.com/users/cyk1337/events{/privacy}", "followers_url": "https://api.github.com/users/cyk1337/followers", "following_url": "https://api.github.com/users/cyk1337/following{/other_user}", "gists_url": "https://api.github.com/users/cyk1337/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cyk1337", "id": 13767887, "login": "cyk1337", "node_id": "MDQ6VXNlcjEzNzY3ODg3", "organizations_url": "https://api.github.com/users/cyk1337/orgs", "received_events_url": "https://api.github.com/users/cyk1337/received_events", "repos_url": "https://api.github.com/users/cyk1337/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cyk1337/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cyk1337/subscriptions", "type": "User", "url": "https://api.github.com/users/cyk1337" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "Hi!\r\n\r\n200+ data loaders is a lot. Have you tried to reduce the number of datasets by concatenating/interleaving the ones with the same structure/task (the API is `{concatenate_datasets/interleave_datasets}([dset1, ..., dset_N])`)?", "Hi @mariosasko, thank you for your reply. I tried pre-concatenating different datasets into one, but one key need is to keep each batch the same data type. Considering that the concatenate-then-segment operation for prefetched samples may span across different data types after concatenating/interleaving (cuz different data sources are mixed), any solution to remain the same data source for each batch?" ]
2022-08-23T08:59:50Z
2022-09-09T03:02:57Z
null
NONE
null
null
null
For the use of multiple datasets and tasks, we use around more than 200+ dataloaders, then pass it into `dataloader1, dataloader2, ..., dataloader200=accelerate.prepare(dataloader1, dataloader2, ..., dataloader200)` It causes the memory error when generating batches. Any solutions to it? ```bash File "/home/xxx/my_code/src/utils/data_utils.py", line 54, in generate_batch x = next(iterator) File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/accelerate/data_loader.py", line 301, in __iter__ for batch in super().__iter__(): File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__ data = self._next_data() File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 28, in fetch data.append(next(self.dataset_iter)) File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/accelerate/data_loader.py", line 249, in __iter__ for element in self.dataset: File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 503, in __iter__ for key, example in self._iter(): File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 500, in _iter yield from ex_iterable File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 231, in __iter__ new_key = "_".join(str(key) for key in keys) MemoryError ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4873/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4873/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4872
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4872/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4872/comments
https://api.github.com/repos/huggingface/datasets/issues/4872/events
https://github.com/huggingface/datasets/pull/4872
1,347,180,765
PR_kwDODunzps49mjU9
4,872
Docs for creating an audio dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Awesome thanks ! I think we can also encourage TAR archives as for image dataset scripts (feel free to copy paste some parts from there lol)", "Thanks for all the great feedback @polinaeterna and @lhoestq! 🥰\r\n\r\nI added all the other feedback, and I'll look into the `librivox-indonesia` script now!", "If you don't mind, I'm taking over this PR since we'll do a release pretty soon", "@lhoestq no, I do :D ", "haha sorry ^^" ]
2022-08-23T01:07:09Z
2022-09-22T17:19:13Z
2022-09-21T10:27:04Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4872.diff", "html_url": "https://github.com/huggingface/datasets/pull/4872", "merged_at": "2022-09-21T10:27:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/4872.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4872" }
This PR is a first draft of how to create audio datasets (`AudioFolder` and loading script). Feel free to let me know if there are any specificities I'm missing for this. 🙂
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4872/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4872/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4871
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4871/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4871/comments
https://api.github.com/repos/huggingface/datasets/issues/4871/events
https://github.com/huggingface/datasets/pull/4871
1,346,703,568
PR_kwDODunzps49k9Rm
4,871
Fix: wmt datasets - fix CWMT zh subsets
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4871). All of your documentation changes will be reflected on that endpoint." ]
2022-08-22T16:42:09Z
2022-08-23T10:00:20Z
2022-08-23T10:00:19Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4871.diff", "html_url": "https://github.com/huggingface/datasets/pull/4871", "merged_at": "2022-08-23T10:00:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/4871.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4871" }
Fix https://github.com/huggingface/datasets/issues/4575 TODO: run `datasets-cli test`: - [x] wmt17 - [x] wmt18 - [x] wmt19
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4871/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4871/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4870
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4870/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4870/comments
https://api.github.com/repos/huggingface/datasets/issues/4870/events
https://github.com/huggingface/datasets/pull/4870
1,346,160,498
PR_kwDODunzps49jGxD
4,870
audio folder check CI
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-22T10:15:53Z
2022-11-02T11:54:35Z
2022-08-22T12:19:40Z
CONTRIBUTOR
null
true
{ "diff_url": "https://github.com/huggingface/datasets/pull/4870.diff", "html_url": "https://github.com/huggingface/datasets/pull/4870", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4870.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4870" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4870/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4870/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4869
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4869/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4869/comments
https://api.github.com/repos/huggingface/datasets/issues/4869/events
https://github.com/huggingface/datasets/pull/4869
1,345,513,758
PR_kwDODunzps49hBGY
4,869
Fix typos in documentation
{ "avatar_url": "https://avatars.githubusercontent.com/u/85993954?v=4", "events_url": "https://api.github.com/users/fl-lo/events{/privacy}", "followers_url": "https://api.github.com/users/fl-lo/followers", "following_url": "https://api.github.com/users/fl-lo/following{/other_user}", "gists_url": "https://api.github.com/users/fl-lo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fl-lo", "id": 85993954, "login": "fl-lo", "node_id": "MDQ6VXNlcjg1OTkzOTU0", "organizations_url": "https://api.github.com/users/fl-lo/orgs", "received_events_url": "https://api.github.com/users/fl-lo/received_events", "repos_url": "https://api.github.com/users/fl-lo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fl-lo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fl-lo/subscriptions", "type": "User", "url": "https://api.github.com/users/fl-lo" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-21T15:10:03Z
2022-08-22T09:25:39Z
2022-08-22T09:09:58Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4869.diff", "html_url": "https://github.com/huggingface/datasets/pull/4869", "merged_at": "2022-08-22T09:09:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/4869.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4869" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4869/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4869/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4868
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4868/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4868/comments
https://api.github.com/repos/huggingface/datasets/issues/4868/events
https://github.com/huggingface/datasets/pull/4868
1,345,191,322
PR_kwDODunzps49gBk0
4,868
adding mafand to datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/23586676?v=4", "events_url": "https://api.github.com/users/dadelani/events{/privacy}", "followers_url": "https://api.github.com/users/dadelani/followers", "following_url": "https://api.github.com/users/dadelani/following{/other_user}", "gists_url": "https://api.github.com/users/dadelani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dadelani", "id": 23586676, "login": "dadelani", "node_id": "MDQ6VXNlcjIzNTg2Njc2", "organizations_url": "https://api.github.com/users/dadelani/orgs", "received_events_url": "https://api.github.com/users/dadelani/received_events", "repos_url": "https://api.github.com/users/dadelani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dadelani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dadelani/subscriptions", "type": "User", "url": "https://api.github.com/users/dadelani" }
[ { "color": "ffffff", "default": true, "description": "This will not be worked on", "id": 1935892913, "name": "wontfix", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEz", "url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi @dadelani, thanks for your awesome contribution!!! :heart: \r\n\r\nHowever, now we are using the Hub to add new datasets, instead of this GitHub repo. \r\n\r\nYou could share this dataset under your Hub organization namespace: [Masakhane NLP](https://huggingface.co/masakhane). This way the dataset will be accessible using:\r\n```python\r\nds = load_dataset(\"masakhane/mafand\")\r\n```\r\n\r\nYou have the procedure documented in our online docs: \r\n- [Create a dataset loading script](https://huggingface.co/docs/datasets/dataset_script)\r\n- [Share](https://huggingface.co/docs/datasets/share)\r\n\r\nMoreover, datasets shared on the Hub no longer need the dummy data files.\r\n\r\nPlease, feel free to ping me if you need any further guidance/support.", "thank you for the comment. I have moved it to the Hub https://huggingface.co/datasets/masakhane/mafand", "Great job, @dadelani!!\r\n\r\nPlease, note that in the README.md file, the YAML tags should be preceded and followed by three dashes `---`, so that they are properly parsed. See, e.g.: https://raw.githubusercontent.com/huggingface/datasets/main/templates/README.md", "Also you could replace the line:\r\n```\r\n# Dataset Card for [Needs More Information]\r\n```\r\nwith\r\n```\r\n# Dataset Card for MAFAND-MT\r\n```", "Great, thank you for the feedback. I have fixed both issues." ]
2022-08-20T15:26:14Z
2022-08-22T11:00:50Z
2022-08-22T08:52:23Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4868.diff", "html_url": "https://github.com/huggingface/datasets/pull/4868", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4868.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4868" }
I'm adding the MAFAND dataset by Masakhane based on the paper/repository below: Paper: https://aclanthology.org/2022.naacl-main.223/ Code: https://github.com/masakhane-io/lafand-mt Please help merge this. Everything works except for creating the dummy data file.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4868/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4868/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4867
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4867/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4867/comments
https://api.github.com/repos/huggingface/datasets/issues/4867/events
https://github.com/huggingface/datasets/pull/4867
1,344,982,646
PR_kwDODunzps49fZle
4,867
Complete tags of superglue dataset card
{ "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/richarddwang", "id": 17963619, "login": "richarddwang", "node_id": "MDQ6VXNlcjE3OTYzNjE5", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "repos_url": "https://api.github.com/users/richarddwang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "type": "User", "url": "https://api.github.com/users/richarddwang" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-19T23:44:39Z
2022-08-22T09:14:03Z
2022-08-22T08:58:31Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4867.diff", "html_url": "https://github.com/huggingface/datasets/pull/4867", "merged_at": "2022-08-22T08:58:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/4867.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4867" }
Related to #4479.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4867/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4867/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4866
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4866/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4866/comments
https://api.github.com/repos/huggingface/datasets/issues/4866/events
https://github.com/huggingface/datasets/pull/4866
1,344,809,132
PR_kwDODunzps49e1CP
4,866
amend docstring for dunder
{ "avatar_url": "https://avatars.githubusercontent.com/u/37704298?v=4", "events_url": "https://api.github.com/users/schafsam/events{/privacy}", "followers_url": "https://api.github.com/users/schafsam/followers", "following_url": "https://api.github.com/users/schafsam/following{/other_user}", "gists_url": "https://api.github.com/users/schafsam/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/schafsam", "id": 37704298, "login": "schafsam", "node_id": "MDQ6VXNlcjM3NzA0Mjk4", "organizations_url": "https://api.github.com/users/schafsam/orgs", "received_events_url": "https://api.github.com/users/schafsam/received_events", "repos_url": "https://api.github.com/users/schafsam/repos", "site_admin": false, "starred_url": "https://api.github.com/users/schafsam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/schafsam/subscriptions", "type": "User", "url": "https://api.github.com/users/schafsam" }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4866). All of your documentation changes will be reflected on that endpoint." ]
2022-08-19T19:09:15Z
2022-09-09T16:33:11Z
null
NONE
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4866.diff", "html_url": "https://github.com/huggingface/datasets/pull/4866", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4866.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4866" }
Display dunder methods in docstrings with underscores and not as bold markdown.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4866/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4866/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4865
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4865/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4865/comments
https://api.github.com/repos/huggingface/datasets/issues/4865/events
https://github.com/huggingface/datasets/issues/4865
1,344,552,626
I_kwDODunzps5QJD6y
4,865
Dataset Viewer issue for MoritzLaurer/multilingual_nli
{ "avatar_url": "https://avatars.githubusercontent.com/u/41862082?v=4", "events_url": "https://api.github.com/users/MoritzLaurer/events{/privacy}", "followers_url": "https://api.github.com/users/MoritzLaurer/followers", "following_url": "https://api.github.com/users/MoritzLaurer/following{/other_user}", "gists_url": "https://api.github.com/users/MoritzLaurer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MoritzLaurer", "id": 41862082, "login": "MoritzLaurer", "node_id": "MDQ6VXNlcjQxODYyMDgy", "organizations_url": "https://api.github.com/users/MoritzLaurer/orgs", "received_events_url": "https://api.github.com/users/MoritzLaurer/received_events", "repos_url": "https://api.github.com/users/MoritzLaurer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MoritzLaurer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MoritzLaurer/subscriptions", "type": "User", "url": "https://api.github.com/users/MoritzLaurer" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Thanks for reporting @MoritzLaurer.\r\n\r\nCurrently, the dataset preview is working properly: https://huggingface.co/datasets/MoritzLaurer/multilingual_nli\r\n\r\nPlease note that when a dataset is modified, it might take some time until the preview is completely updated.\r\n\r\n@severo might it be worth adding a clearer error message, something like \"The preview is updating, please retry later\"?", "Thanks for your response. You are right, its now working well. I had waited for 30 min or so and refreshed several times and thought there was some other error. Yeah, a different error message sounds like a good idea to avoid confusion. ", "I'm closing this issue then.", "> @severo might it be worth adding a clearer error message, something like \"The preview is updating, please retry later\"?\r\n\r\nYes, it's a known issue, and we're about to ship a better version" ]
2022-08-19T14:55:20Z
2022-08-22T14:47:14Z
2022-08-22T06:13:20Z
NONE
null
null
null
### Link _No response_ ### Description I've just uploaded a new dataset to the hub and the viewer does not work for some reason, see here: https://huggingface.co/datasets/MoritzLaurer/multilingual_nli It displays the error: ``` Status code: 400 Exception: Status400Error Message: The dataset does not exist. ``` Weirdly enough the dataviewer works for an earlier version of the same dataset. The only difference is that it is smaller, but I'm not aware of other changes I have made: https://huggingface.co/datasets/MoritzLaurer/multilingual_nli_test Do you know why the dataviewer is not working? ### Owner _No response_
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4865/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4865/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4864
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4864/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4864/comments
https://api.github.com/repos/huggingface/datasets/issues/4864/events
https://github.com/huggingface/datasets/issues/4864
1,344,410,043
I_kwDODunzps5QIhG7
4,864
Allow pathlib PosixPath in Dataset.read_json
{ "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "events_url": "https://api.github.com/users/cccntu/events{/privacy}", "followers_url": "https://api.github.com/users/cccntu/followers", "following_url": "https://api.github.com/users/cccntu/following{/other_user}", "gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cccntu", "id": 31893406, "login": "cccntu", "node_id": "MDQ6VXNlcjMxODkzNDA2", "organizations_url": "https://api.github.com/users/cccntu/orgs", "received_events_url": "https://api.github.com/users/cccntu/received_events", "repos_url": "https://api.github.com/users/cccntu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cccntu/subscriptions", "type": "User", "url": "https://api.github.com/users/cccntu" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
2022-08-19T12:59:17Z
2022-08-19T12:59:17Z
null
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** ``` from pathlib import Path from datasets import Dataset ds = Dataset.read_json(Path('data.json')) ``` causes an error ``` AttributeError: 'PosixPath' object has no attribute 'decode' ``` **Describe the solution you'd like** It should be able to accept a PosixPath and read the JSON from it.
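A minimal workaround sketch, assuming the loader currently only handles string paths: cast the PosixPath to `str` before passing it. `Dataset.from_json` is used here as the public JSON constructor, and `data.json` is a hypothetical local file.

```python
from pathlib import Path

from datasets import Dataset

data_path = Path("data.json")  # hypothetical local JSON file

# Workaround sketch: hand the loader a plain string instead of a PosixPath,
# since the reported AttributeError comes from treating the Path object as bytes.
ds = Dataset.from_json(str(data_path))
```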
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4864/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4864/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4863
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4863/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4863/comments
https://api.github.com/repos/huggingface/datasets/issues/4863/events
https://github.com/huggingface/datasets/issues/4863
1,343,737,668
I_kwDODunzps5QF89E
4,863
TFDS wiki_dialog dataset to Huggingface dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/12378820?v=4", "events_url": "https://api.github.com/users/djaym7/events{/privacy}", "followers_url": "https://api.github.com/users/djaym7/followers", "following_url": "https://api.github.com/users/djaym7/following{/other_user}", "gists_url": "https://api.github.com/users/djaym7/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/djaym7", "id": 12378820, "login": "djaym7", "node_id": "MDQ6VXNlcjEyMzc4ODIw", "organizations_url": "https://api.github.com/users/djaym7/orgs", "received_events_url": "https://api.github.com/users/djaym7/received_events", "repos_url": "https://api.github.com/users/djaym7/repos", "site_admin": false, "starred_url": "https://api.github.com/users/djaym7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/djaym7/subscriptions", "type": "User", "url": "https://api.github.com/users/djaym7" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[ "@albertvillanova any help ? The linked dataset is in beam format which is similar to wikipedia dataset in huggingface that you scripted..", "Nvm, I was able to port it to huggingface datasets, will upload to the hub soon", "https://huggingface.co/datasets/djaym7/wiki_dialog", "Thanks for the addition, @djaym7." ]
2022-08-18T23:06:30Z
2022-08-22T09:41:45Z
2022-08-22T05:18:53Z
NONE
null
null
null
## Adding a Dataset - **Name:** *Wiki_dialog* - **Description:** https://github.com/google-research/dialog-inpainting#:~:text=JSON%20object%2C%20for-,example,-%3A - **Paper:** https://arxiv.org/abs/2205.09073 - **Data:** https://github.com/google-research/dialog-inpainting - **Motivation:** *Research and development on the biggest corpus of dialog data* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md).
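Per the comments above, the dataset was later ported to the Hub as djaym7/wiki_dialog; a minimal loading sketch (any required config or split names are not given in the thread and are assumptions):

```python
from datasets import load_dataset

# Sketch based on the Hub repository mentioned in the comments above;
# a config name may be required and is omitted here as an assumption.
ds = load_dataset("djaym7/wiki_dialog")
```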
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4863/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4863/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4862
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4862/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4862/comments
https://api.github.com/repos/huggingface/datasets/issues/4862/events
https://github.com/huggingface/datasets/issues/4862
1,343,464,699
I_kwDODunzps5QE6T7
4,862
Got "AttributeError: 'xPath' object has no attribute 'read'" when loading an excel dataset with my own code
{ "avatar_url": "https://avatars.githubusercontent.com/u/38536635?v=4", "events_url": "https://api.github.com/users/yana-xuyan/events{/privacy}", "followers_url": "https://api.github.com/users/yana-xuyan/followers", "following_url": "https://api.github.com/users/yana-xuyan/following{/other_user}", "gists_url": "https://api.github.com/users/yana-xuyan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yana-xuyan", "id": 38536635, "login": "yana-xuyan", "node_id": "MDQ6VXNlcjM4NTM2NjM1", "organizations_url": "https://api.github.com/users/yana-xuyan/orgs", "received_events_url": "https://api.github.com/users/yana-xuyan/received_events", "repos_url": "https://api.github.com/users/yana-xuyan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yana-xuyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yana-xuyan/subscriptions", "type": "User", "url": "https://api.github.com/users/yana-xuyan" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "What's more, the downloaded data is actually a folder instead of an excel file.", "Hi hi, instead of using `download_and_extract` function, I only use `download` function: `base_dir = Path(dl_manager.download(urls))`. It turns out that the code works for `datasets==2.2.2`, however, it doesn't work with `datasets==2.4.0`. ", "Hi @yana-xuyan, thanks for reporting.\r\n\r\nIndeed you already found the answer: an Excel file should be just downloaded and not downloaded-and-extracted.\r\n\r\nThe reason why is that if you call also extract, our library will try to infer the compression format (and extract it). And Excel files are viewed as ZIP files and extracted as so (into a directory). This is because the Office Open XML is indeed a zipped file under the hood): https://en.wikipedia.org/wiki/Office_Open_XML\r\n> Office Open XML (also informally known as OOXML) is a **zipped**, XML-based file format\r\n```python\r\nimport zipfile\r\n\r\nzipfile.is_zipfile(\"filename.xlsx\")\r\n```\r\nreturns `True`.", "Hi @albertvillanova, thank you for your reply! Do you have any clue on why the same error still exists with `datasets==2.4.0` even after I don't extract the downloaded file? FYI, if I downgrade to `datasets==2.2.2`, the code works fine.", "I guess this has to do with the cache: you should remove the previously-wrongly generated directory from the cache; otherwise `datasets` tries to re-use it." ]
2022-08-18T18:36:14Z
2022-08-31T09:25:08Z
2022-08-31T09:25:08Z
NONE
null
null
null
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug # The dataset function is as follows: from pathlib import Path from typing import Dict, List, Tuple import datasets import pandas as pd _CITATION = """\ """ _DATASETNAME = "jadi_ide" _DESCRIPTION = """\ """ _HOMEPAGE = "" _LICENSE = "Unknown" _URLS = { _DATASETNAME: "https://github.com/fathanick/Javanese-Dialect-Identification-from-Twitter-Data/raw/main/Update 16K_Dataset.xlsx", } _SOURCE_VERSION = "1.0.0" class JaDi_Ide(datasets.GeneratorBasedBuilder): SOURCE_VERSION = datasets.Version(_SOURCE_VERSION) BUILDER_CONFIGS = [ NusantaraConfig( name="jadi_ide_source", version=SOURCE_VERSION, description="JaDi-Ide source schema", schema="source", subset_id="jadi_ide", ), ] DEFAULT_CONFIG_NAME = "source" def _info(self) -> datasets.DatasetInfo: if self.config.schema == "source": features = datasets.Features( { "id": datasets.Value("string"), "text": datasets.Value("string"), "label": datasets.Value("string") } ) return datasets.DatasetInfo( description=_DESCRIPTION, features=features, homepage=_HOMEPAGE, license=_LICENSE, citation=_CITATION, ) def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]: """Returns SplitGenerators.""" # Dataset does not have predetermined split, putting all as TRAIN urls = _URLS[_DATASETNAME] base_dir = Path(dl_manager.download_and_extract(urls)) data_files = {"train": base_dir} return [ datasets.SplitGenerator( name=datasets.Split.TRAIN, gen_kwargs={ "filepath": data_files["train"], "split": "train", }, ), ] def _generate_examples(self, filepath: Path, split: str) -> Tuple[int, Dict]: """Yields examples as (key, example) tuples.""" df = pd.read_excel(filepath, engine='openpyxl') df.columns = ["id", "text", "label"] if self.config.schema == "source": for row in df.itertuples(): ex = { "id": str(row.id), "text": row.text, "label": row.label, } yield row.id, ex ``` ## Expected results Expecting to load the dataset smoothly. ## Actual results File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 1751, in load_dataset use_auth_token=use_auth_token, File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 705, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 1227, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 793, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 1216, in _prepare_split desc=f"Generating {split_info.name} split", File "/home/xuyan/anaconda3/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__ for obj in iterable: File "/home/xuyan/.cache/huggingface/modules/datasets_modules/datasets/jadi_ide/7a539f2b6f726defea8fbe36ceda17bae66c370f6d6c418e3a08d760ebef7519/jadi_ide.py", line 107, in _generate_examples df = pd.read_excel(filepath, engine='openpyxl') File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/download/streaming_download_manager.py", line 701, in xpandas_read_excel return pd.read_excel(BytesIO(filepath_or_buffer.read()), **kwargs) AttributeError: 'xPath' object has no attribute 'read' ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: Linux-4.15.0-142-generic-x86_64-with-debian-stretch-sid - Python version: 3.7.4 - PyArrow version: 9.0.0 - Pandas version: 0.25.1
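Given the explanation in the comments above (an .xlsx is a zipped OOXML file, so it should be downloaded but not extracted), a minimal sketch of how the reported `_split_generators` could be adjusted, reusing the names from the script in this report; this is an illustrative assumption, not the actual fix that was applied:

```python
def _split_generators(self, dl_manager):
    """Returns SplitGenerators (sketch: download only, no extraction)."""
    urls = _URLS[_DATASETNAME]
    # download() keeps the .xlsx file as-is; download_and_extract() would
    # unzip it into a directory because OOXML files are zip archives.
    filepath = dl_manager.download(urls)
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN,
            gen_kwargs={"filepath": filepath, "split": "train"},
        ),
    ]
```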
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4862/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4862/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4861
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4861/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4861/comments
https://api.github.com/repos/huggingface/datasets/issues/4861/events
https://github.com/huggingface/datasets/issues/4861
1,343,260,220
I_kwDODunzps5QEIY8
4,861
Using disk for memory with the method `from_dict`
{ "avatar_url": "https://avatars.githubusercontent.com/u/44556846?v=4", "events_url": "https://api.github.com/users/HugoLaurencon/events{/privacy}", "followers_url": "https://api.github.com/users/HugoLaurencon/followers", "following_url": "https://api.github.com/users/HugoLaurencon/following{/other_user}", "gists_url": "https://api.github.com/users/HugoLaurencon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/HugoLaurencon", "id": 44556846, "login": "HugoLaurencon", "node_id": "MDQ6VXNlcjQ0NTU2ODQ2", "organizations_url": "https://api.github.com/users/HugoLaurencon/orgs", "received_events_url": "https://api.github.com/users/HugoLaurencon/received_events", "repos_url": "https://api.github.com/users/HugoLaurencon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/HugoLaurencon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HugoLaurencon/subscriptions", "type": "User", "url": "https://api.github.com/users/HugoLaurencon" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
2022-08-18T15:18:18Z
2022-08-18T15:18:18Z
null
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** I start with an empty dataset. In a loop, at each iteration, I create a new dataset with the method `from_dict` (based on some data I load) and I concatenate this new dataset with the one at the previous iteration. After some iterations, I have an OOM error. **Describe the solution you'd like** The method `from_dict` loads the data in RAM. It could be good to add an option to use the disk instead. **Describe alternatives you've considered** To solve the problem, I have to do an intermediate step where I save the new datasets at each iteration with `save_to_disk`. Once it's done, I open them all and concatenate them.
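A minimal sketch of the alternative described above — saving each `from_dict` chunk with `save_to_disk` and concatenating the reopened, memory-mapped chunks at the end. The chunk-producing `iter_record_batches` is a hypothetical placeholder for however the data is loaded:

```python
from datasets import Dataset, concatenate_datasets, load_from_disk

chunk_dirs = []
for i, records in enumerate(iter_record_batches()):  # hypothetical loader yielding dicts of lists
    chunk = Dataset.from_dict(records)
    chunk_dir = f"chunks/chunk_{i}"
    chunk.save_to_disk(chunk_dir)  # spill the chunk to disk instead of keeping it in RAM
    chunk_dirs.append(chunk_dir)
    del chunk  # drop the in-memory copy before the next iteration

# Re-open the on-disk chunks (memory-mapped) and concatenate them.
ds = concatenate_datasets([load_from_disk(d) for d in chunk_dirs])
```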
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4861/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4861/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4860
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4860/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4860/comments
https://api.github.com/repos/huggingface/datasets/issues/4860/events
https://github.com/huggingface/datasets/pull/4860
1,342,311,540
PR_kwDODunzps49WjEu
4,860
Add collection3 dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/16446994?v=4", "events_url": "https://api.github.com/users/pefimov/events{/privacy}", "followers_url": "https://api.github.com/users/pefimov/followers", "following_url": "https://api.github.com/users/pefimov/following{/other_user}", "gists_url": "https://api.github.com/users/pefimov/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pefimov", "id": 16446994, "login": "pefimov", "node_id": "MDQ6VXNlcjE2NDQ2OTk0", "organizations_url": "https://api.github.com/users/pefimov/orgs", "received_events_url": "https://api.github.com/users/pefimov/received_events", "repos_url": "https://api.github.com/users/pefimov/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pefimov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pefimov/subscriptions", "type": "User", "url": "https://api.github.com/users/pefimov" }
[ { "color": "ffffff", "default": true, "description": "This will not be worked on", "id": 1935892913, "name": "wontfix", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEz", "url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix" } ]
closed
false
null
[]
null
[ "Hi @pefimov. Thanks for you awesome work on this dataset contribution.\r\n\r\nHowever, now we are using the Hub to add new datasets, instead of this GitHub repo. \r\n\r\nYou could share this dataset under the appropriate Hub organization namespace. This way the dataset will be accessible using:\r\n```python\r\nds = load_dataset(\"<org_namespace>/collection3\")\r\n```\r\n\r\nYou have the procedure documented in our online docs: \r\n- [Create a dataset loading script](https://huggingface.co/docs/datasets/dataset_script)\r\n- [Share](https://huggingface.co/docs/datasets/share)\r\n\r\nMoreover, datasets shared on the Hub no longer need the dummy data files.\r\n\r\nPlease, feel free to ping me if you need any further guidance/support. ", "> However, now we are using the Hub to add new datasets, instead of this GitHub repo.\r\n> \r\n> You could share this dataset under the appropriate Hub organization namespace. This way the dataset will be accessible using:\r\n> \r\n> ```python\r\n> ds = load_dataset(\"<org_namespace>/collection3\")\r\n> ```\r\n> \r\nHi @albertvillanova . Thank you for your response.\r\n\r\nI thought that Collection3 is large and important dataset in Russian presented in 2016 but not represented in huggingface.\r\n\r\nAlso I am not related to authors or organisation of dataset", "The current policy of sharing datasets on the Hub instead of in this GitHub repo has no relation with the importance of the dataset: https://huggingface.co/docs/datasets/share#datasets-on-github-legacy \r\n> The distinction between a Hub dataset and a dataset from GitHub only comes from the legacy sharing workflow. It does not involve any ranking, decisioning, or opinion regarding the contents of the dataset itself.\r\n\r\nIt is not required to be an author/owner (or belong to the organization that is owner) of the dataset in order to share it on the Hub (as it was not the case when sharing them on this GitHub repo). \r\n\r\nIt is recommended to share it under an organization namespace that makes sense though. For this specific dataset, do you know of a clear organization under which it could be shared on the Hub? Maybe \"labinform\", or \"Information Research Laboratory\" or \"Lomonosov Moscow State University\"?\r\n\r\nIn cases like this, where the org is not evident, one possibility could be to contact the dataset owners/creators and ask them. According the publication paper, the authors are:\r\n- V.A. Mozharova\r\n- N.V. Loukachevitch\r\n\r\nI think maybe it would be worth contacting them.", "@pefimov I have contacted the authors (and put you in CC).", "Reply from the authors:\r\n> It is better to use name: Research Computing Center of Lomonosov Moscow State University (short name RCC-MSU)\r\n> https://rcc.msu.ru/en", "I have created the corresponding org namespace and dataset empty repository: https://huggingface.co/datasets/RCC-MSU/collection3\r\n\r\n@pefimov feel free to open a PR on the Hub if you are willing to do so: \r\n- Go to the *Community* tab on the repo: https://huggingface.co/datasets/RCC-MSU/collection3/discussions\r\n- And click: *New pull request* button\r\n\r\nDocs: [Pull requests and Discussions](https://huggingface.co/docs/hub/repositories-pull-requests-discussions) on the Hub", "Thanks" ]
2022-08-17T21:31:42Z
2022-08-23T20:02:45Z
2022-08-22T09:08:59Z
NONE
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4860.diff", "html_url": "https://github.com/huggingface/datasets/pull/4860", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4860.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4860" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4860/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4860/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4859
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4859/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4859/comments
https://api.github.com/repos/huggingface/datasets/issues/4859/events
https://github.com/huggingface/datasets/issues/4859
1,342,231,016
I_kwDODunzps5QANHo
4,859
can't install using conda on Windows 10
{ "avatar_url": "https://avatars.githubusercontent.com/u/22627691?v=4", "events_url": "https://api.github.com/users/xoffey/events{/privacy}", "followers_url": "https://api.github.com/users/xoffey/followers", "following_url": "https://api.github.com/users/xoffey/following{/other_user}", "gists_url": "https://api.github.com/users/xoffey/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xoffey", "id": 22627691, "login": "xoffey", "node_id": "MDQ6VXNlcjIyNjI3Njkx", "organizations_url": "https://api.github.com/users/xoffey/orgs", "received_events_url": "https://api.github.com/users/xoffey/received_events", "repos_url": "https://api.github.com/users/xoffey/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xoffey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xoffey/subscriptions", "type": "User", "url": "https://api.github.com/users/xoffey" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[]
2022-08-17T19:57:37Z
2022-08-17T19:57:37Z
null
NONE
null
null
null
## Describe the bug I wanted to install using conda or Anaconda navigator. That didn't work, so I had to install using pip. ## Steps to reproduce the bug conda install -c huggingface -c conda-forge datasets ## Expected results Should have indicated successful installation. ## Actual results Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source. ... took forever, so I cancelled it with ctrl-c ## Environment info - `datasets` version: 2.4.0 # after installing with pip - Platform: Windows-10-10.0.19044-SP0 - Python version: 3.9.12 - PyArrow version: 9.0.0 - Pandas version: 1.4.2 - conda version: 4.13.0 conda info active environment : base active env location : G:\anaconda2022 shell level : 1 user config file : C:\Users\michael\.condarc populated config files : C:\Users\michael\.condarc conda version : 4.13.0 conda-build version : 3.21.8 python version : 3.9.12.final.0 virtual packages : __cuda=11.1=0 __win=0=0 __archspec=1=x86_64 base environment : G:\anaconda2022 (writable) conda av data dir : G:\anaconda2022\etc\conda conda av metadata url : None channel URLs : https://conda.anaconda.org/pytorch/win-64 https://conda.anaconda.org/pytorch/noarch https://conda.anaconda.org/huggingface/win-64 https://conda.anaconda.org/huggingface/noarch https://conda.anaconda.org/conda-forge/win-64 https://conda.anaconda.org/conda-forge/noarch https://conda.anaconda.org/anaconda-fusion/win-64 https://conda.anaconda.org/anaconda-fusion/noarch https://repo.anaconda.com/pkgs/main/win-64 https://repo.anaconda.com/pkgs/main/noarch https://repo.anaconda.com/pkgs/r/win-64 https://repo.anaconda.com/pkgs/r/noarch https://repo.anaconda.com/pkgs/msys2/win-64 https://repo.anaconda.com/pkgs/msys2/noarch package cache : G:\anaconda2022\pkgs C:\Users\michael\.conda\pkgs C:\Users\michael\AppData\Local\conda\conda\pkgs envs directories : G:\anaconda2022\envs C:\Users\michael\.conda\envs C:\Users\michael\AppData\Local\conda\conda\envs platform : win-64 user-agent : conda/4.13.0 requests/2.27.1 CPython/3.9.12 Windows/10 Windows/10.0.19044 administrator : False netrc file : None offline mode : False
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4859/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4859/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4858
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4858/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4858/comments
https://api.github.com/repos/huggingface/datasets/issues/4858/events
https://github.com/huggingface/datasets/issues/4858
1,340,859,853
I_kwDODunzps5P6-XN
4,858
map() function removes columns when input_columns is not None
{ "avatar_url": "https://avatars.githubusercontent.com/u/16939722?v=4", "events_url": "https://api.github.com/users/pramodith/events{/privacy}", "followers_url": "https://api.github.com/users/pramodith/followers", "following_url": "https://api.github.com/users/pramodith/following{/other_user}", "gists_url": "https://api.github.com/users/pramodith/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pramodith", "id": 16939722, "login": "pramodith", "node_id": "MDQ6VXNlcjE2OTM5NzIy", "organizations_url": "https://api.github.com/users/pramodith/orgs", "received_events_url": "https://api.github.com/users/pramodith/received_events", "repos_url": "https://api.github.com/users/pramodith/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pramodith/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pramodith/subscriptions", "type": "User", "url": "https://api.github.com/users/pramodith" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi! Thanks for reporting! This looks like a bug. I've just opened a PR with the fix.", "Awesome! Thank you. I'll close the issue once the PR gets merged. :-)", "I guess we should reopen after the revert by:\r\n- #5006" ]
2022-08-16T20:42:30Z
2022-09-22T13:55:24Z
2022-09-22T13:55:24Z
NONE
null
null
null
## Describe the bug The map function, removes features from the dataset that are not present in the _input_columns_ list of columns, despite the columns being removed not mentioned in the _remove_columns_ argument. ## Steps to reproduce the bug ```python from datasets import Dataset ds = Dataset.from_dict({"a" : [1,2,3],"b" : [0,1,0], "c" : [2,4,5]}) def double(x,y): x = x*2 y = y*2 return {"d" : x, "e" : y} ds.map(double, input_columns=["a","c"]) ``` ## Expected results ``` Dataset({ features: ['a', 'b', 'c', 'd', 'e'], num_rows: 3 }) ``` ## Actual results ``` Dataset({ features: ['a', 'c', 'd', 'e'], num_rows: 3 }) ``` In this specific example feature **b** should not be removed. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: linux (colab) - Python version: 3.7.13 - PyArrow version: 6.0.1
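Until the bug is fixed, one workaround sketch is to drop `input_columns` and read the needed fields inside the mapped function, so `map` sees full examples and no columns are removed (same toy dataset as above):

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3], "b": [0, 1, 0], "c": [2, 4, 5]})

def double(example):
    # Read the needed columns from the full example instead of using input_columns.
    return {"d": example["a"] * 2, "e": example["c"] * 2}

ds = ds.map(double)  # keeps 'a', 'b', 'c' and adds 'd', 'e'
```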
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4858/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4858/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4857
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4857/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4857/comments
https://api.github.com/repos/huggingface/datasets/issues/4857/events
https://github.com/huggingface/datasets/issues/4857
1,340,397,153
I_kwDODunzps5P5NZh
4,857
No preprocessed wikipedia is working on huggingface/datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/30733039?v=4", "events_url": "https://api.github.com/users/aninrusimha/events{/privacy}", "followers_url": "https://api.github.com/users/aninrusimha/followers", "following_url": "https://api.github.com/users/aninrusimha/following{/other_user}", "gists_url": "https://api.github.com/users/aninrusimha/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/aninrusimha", "id": 30733039, "login": "aninrusimha", "node_id": "MDQ6VXNlcjMwNzMzMDM5", "organizations_url": "https://api.github.com/users/aninrusimha/orgs", "received_events_url": "https://api.github.com/users/aninrusimha/received_events", "repos_url": "https://api.github.com/users/aninrusimha/repos", "site_admin": false, "starred_url": "https://api.github.com/users/aninrusimha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aninrusimha/subscriptions", "type": "User", "url": "https://api.github.com/users/aninrusimha" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Thanks for reporting @aninrusimha.\r\n\r\nPlease, note that the preprocessed datasets are still available, as described in the dataset card, e.g.: https://huggingface.co/datasets/wikipedia\r\n```python\r\nds = load_dataset(\"wikipedia\", \"20220301.en\")\r\n``` ", "This is working now, but I was getting an error a few days ago when running an existing script. Unfortunately I did not do a proper bug report, but for some reason I was unable to load the dataset due to a request being made to the wikimedia website. However, its working now. Thanks for the reply!" ]
2022-08-16T13:55:33Z
2022-08-17T13:35:08Z
2022-08-17T13:35:08Z
NONE
null
null
null
## Describe the bug 20220301 wikipedia dump has been deprecated, so now there is no working wikipedia dump on huggingface https://huggingface.co/datasets/wikipedia https://dumps.wikimedia.org/enwiki/
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4857/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4857/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4856
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4856/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4856/comments
https://api.github.com/repos/huggingface/datasets/issues/4856/events
https://github.com/huggingface/datasets/issues/4856
1,339,779,957
I_kwDODunzps5P22t1
4,856
File missing when calling load_dataset with openwebtext on Windows
{ "avatar_url": "https://avatars.githubusercontent.com/u/10361976?v=4", "events_url": "https://api.github.com/users/kingstarcraft/events{/privacy}", "followers_url": "https://api.github.com/users/kingstarcraft/followers", "following_url": "https://api.github.com/users/kingstarcraft/following{/other_user}", "gists_url": "https://api.github.com/users/kingstarcraft/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kingstarcraft", "id": 10361976, "login": "kingstarcraft", "node_id": "MDQ6VXNlcjEwMzYxOTc2", "organizations_url": "https://api.github.com/users/kingstarcraft/orgs", "received_events_url": "https://api.github.com/users/kingstarcraft/received_events", "repos_url": "https://api.github.com/users/kingstarcraft/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kingstarcraft/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kingstarcraft/subscriptions", "type": "User", "url": "https://api.github.com/users/kingstarcraft" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "I have tried to extract ```0015896-b1054262f7da52a0518521e29c8e352c.txt``` from ```17ecf461bfccd469a1fbc264ccb03731f8606eea7b3e2e8b86e13d18040bf5b3/urlsf_subset00-16_data.xz``` with 7-zip\r\nand put the file into cache_path ```F://huggingface/datasets/downloads/extracted/0901d27f43b7e9ac0577da0d0061c8c632ba0b70ecd1b4bfb21562d9b7486faa```\r\nthere is still raise the same error and I find the file was removed from cache_path after I run the run_mlm.py with ```python run_mlm.py --model_type roberta --tokenizer_name roberta-base --dataset_name openwebtext --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --do_train --do_eval --output_dir F:/model/roberta-base```." ]
2022-08-16T04:04:22Z
2022-08-16T09:06:32Z
null
NONE
null
null
null
## Describe the bug 0015896-b1054262f7da52a0518521e29c8e352c.txt is missing when I run run_mlm.py with openwebtext. I check the cache_path and can not find 0015896-b1054262f7da52a0518521e29c8e352c.txt. but I can find this file in the 17ecf461bfccd469a1fbc264ccb03731f8606eea7b3e2e8b86e13d18040bf5b3/urlsf_subset00-16_data.xz with 7-zip. ## Steps to reproduce the bug ```sh python run_mlm.py --model_type roberta --tokenizer_name roberta-base --dataset_name openwebtext --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --do_train --do_eval --output_dir F:/model/roberta-base ``` or ```python from datasets import load_dataset load_dataset("openwebtext", None, cache_dir=None, use_auth_token=None) ``` ## Expected results Loading is successful ## Actual results Traceback (most recent call last): File "D:\Python\v3.8.5\lib\site-packages\datasets\builder.py", line 704, in download_and_prepare self._download_and_prepare( File "D:\Python\v3.8.5\lib\site-packages\datasets\builder.py", line 1227, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File "D:\Python\v3.8.5\lib\site-packages\datasets\builder.py", line 795, in _download_and_prepare raise OSError( OSError: Cannot find data file. Original error: [Errno 22] Invalid argument: 'F://huggingface/datasets/downloads/extracted/0901d27f43b7e9ac0577da0d0061c8c632ba0b70ecd1b4bfb21562d9b7486faa/0015896-b1054262f7da52a0518521e29c8e352c.txt' ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: windows - Python version: 3.8.5 - PyArrow version: 9.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4856/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4856/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4855
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4855/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4855/comments
https://api.github.com/repos/huggingface/datasets/issues/4855/events
https://github.com/huggingface/datasets/issues/4855
1,339,699,975
I_kwDODunzps5P2jMH
4,855
Dataset Viewer issue for super_glue
{ "avatar_url": "https://avatars.githubusercontent.com/u/54366859?v=4", "events_url": "https://api.github.com/users/wzsxxa/events{/privacy}", "followers_url": "https://api.github.com/users/wzsxxa/followers", "following_url": "https://api.github.com/users/wzsxxa/following{/other_user}", "gists_url": "https://api.github.com/users/wzsxxa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wzsxxa", "id": 54366859, "login": "wzsxxa", "node_id": "MDQ6VXNlcjU0MzY2ODU5", "organizations_url": "https://api.github.com/users/wzsxxa/orgs", "received_events_url": "https://api.github.com/users/wzsxxa/received_events", "repos_url": "https://api.github.com/users/wzsxxa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wzsxxa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wzsxxa/subscriptions", "type": "User", "url": "https://api.github.com/users/wzsxxa" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
null
[]
null
[ "Thanks for reporting @wzsxxa.\r\n\r\nHowever the \"super_glue\" dataset is rendered properly by the Dataset preview: https://huggingface.co/datasets/super_glue" ]
2022-08-16T01:34:56Z
2022-08-22T10:08:01Z
2022-08-22T10:07:45Z
NONE
null
null
null
### Link https://huggingface.co/datasets/super_glue ### Description can't view super_glue dataset on the web page ### Owner _No response_
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4855/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4855/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4853
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4853/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4853/comments
https://api.github.com/repos/huggingface/datasets/issues/4853/events
https://github.com/huggingface/datasets/pull/4853
1,339,456,490
PR_kwDODunzps49NFNL
4,853
Fix bug and checksums in exams dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-15T20:17:57Z
2022-08-16T06:43:57Z
2022-08-16T06:29:06Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4853.diff", "html_url": "https://github.com/huggingface/datasets/pull/4853", "merged_at": "2022-08-16T06:29:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/4853.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4853" }
Fix #4852.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4853/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4853/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4852
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4852/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4852/comments
https://api.github.com/repos/huggingface/datasets/issues/4852/events
https://github.com/huggingface/datasets/issues/4852
1,339,450,991
I_kwDODunzps5P1mZv
4,852
Bug in multilingual_with_para config of exams dataset and checksums error
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Hi @albertvillanova. Unfortunately I still get this error. Is this because the merge has yet to be released? Is there a way to track the release?", "Hi @thesofakillers, yes you are right: the fix will be available after next release (it was planned for today; Monday at the latest).\r\n\r\nIn the meantime, you can use the version of the `exams` on our main branch by passing `revision` to `load_dataset`:\r\n```python\r\nds = load_dataset(\"exams\", revision=\"main\")\r\n```" ]
2022-08-15T20:14:52Z
2022-09-16T09:50:55Z
2022-08-16T06:29:07Z
MEMBER
null
null
null
## Describe the bug There is a bug for "multilingual_with_para" config in exams dataset: ```python ds = load_dataset("./datasets/exams", split="train") ``` raises: ``` KeyError: 'choices' ``` Moreover, there is a NonMatchingChecksumError: ``` NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://github.com/mhardalov/exams-qa/raw/main/data/exams/multilingual/with_paragraphs/train_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/multilingual/with_paragraphs/dev_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/multilingual/with_paragraphs/test_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/test_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_bg_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_bg_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_hr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_hr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_hu_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_hu_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_it_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_it_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_mk_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_mk_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_pl_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_pl_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_pt_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_pt_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_sq_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_sq_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_sr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_sr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_tr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_tr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_vi_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_vi_with_para.jsonl.tar.gz'] ``` CC: @thesofakillers
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4852/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4852/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4851
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4851/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4851/comments
https://api.github.com/repos/huggingface/datasets/issues/4851/events
https://github.com/huggingface/datasets/pull/4851
1,339,085,917
PR_kwDODunzps49L6ee
4,851
Fix license tag and Source Data section in billsum dataset card
{ "avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4", "events_url": "https://api.github.com/users/kashif/events{/privacy}", "followers_url": "https://api.github.com/users/kashif/followers", "following_url": "https://api.github.com/users/kashif/following{/other_user}", "gists_url": "https://api.github.com/users/kashif/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kashif", "id": 8100, "login": "kashif", "node_id": "MDQ6VXNlcjgxMDA=", "organizations_url": "https://api.github.com/users/kashif/orgs", "received_events_url": "https://api.github.com/users/kashif/received_events", "repos_url": "https://api.github.com/users/kashif/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kashif/subscriptions", "type": "User", "url": "https://api.github.com/users/kashif" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "thanks @albertvillanova done thank you!" ]
2022-08-15T14:37:00Z
2022-08-22T13:56:24Z
2022-08-22T13:40:59Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4851.diff", "html_url": "https://github.com/huggingface/datasets/pull/4851", "merged_at": "2022-08-22T13:40:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/4851.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4851" }
Fixed the data source and license fields
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4851/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4851/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4850
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4850/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4850/comments
https://api.github.com/repos/huggingface/datasets/issues/4850/events
https://github.com/huggingface/datasets/pull/4850
1,338,702,306
PR_kwDODunzps49KnZ8
4,850
Fix test of _get_extraction_protocol for TAR files
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-15T08:37:58Z
2022-08-15T09:42:56Z
2022-08-15T09:28:46Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4850.diff", "html_url": "https://github.com/huggingface/datasets/pull/4850", "merged_at": "2022-08-15T09:28:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/4850.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4850" }
While working in another PR, I discovered an xpass test (a test that is supposed to xfail but nevertheless passes) when testing `_get_extraction_protocol`: https://github.com/huggingface/datasets/runs/7818845285?check_suite_focus=true ``` XPASS tests/test_streaming_download_manager.py::test_streaming_dl_manager_get_extraction_protocol_throws[https://foo.bar/train.tar] ``` This PR: - refactors the test so that it tests the raise of the exceptions instead of xfailing - fixes the test for TAR files: it does not raise an exception, but returns "tar" - fixes some tests wrongly named: exchange `test_streaming_dl_manager_get_extraction_protocol` with `test_streaming_dl_manager_get_extraction_protocol_gg_drive`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4850/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4850/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4849
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4849/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4849/comments
https://api.github.com/repos/huggingface/datasets/issues/4849/events
https://github.com/huggingface/datasets/pull/4849
1,338,273,900
PR_kwDODunzps49JN8d
4,849
1.18.x
{ "avatar_url": "https://avatars.githubusercontent.com/u/49282718?v=4", "events_url": "https://api.github.com/users/Mr-Robot-001/events{/privacy}", "followers_url": "https://api.github.com/users/Mr-Robot-001/followers", "following_url": "https://api.github.com/users/Mr-Robot-001/following{/other_user}", "gists_url": "https://api.github.com/users/Mr-Robot-001/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Mr-Robot-001", "id": 49282718, "login": "Mr-Robot-001", "node_id": "MDQ6VXNlcjQ5MjgyNzE4", "organizations_url": "https://api.github.com/users/Mr-Robot-001/orgs", "received_events_url": "https://api.github.com/users/Mr-Robot-001/received_events", "repos_url": "https://api.github.com/users/Mr-Robot-001/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Mr-Robot-001/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mr-Robot-001/subscriptions", "type": "User", "url": "https://api.github.com/users/Mr-Robot-001" }
[]
closed
false
null
[]
null
[]
2022-08-14T15:09:19Z
2022-08-14T15:10:02Z
2022-08-14T15:10:02Z
NONE
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4849.diff", "html_url": "https://github.com/huggingface/datasets/pull/4849", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4849.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4849" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4849/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4849/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4848
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4848/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4848/comments
https://api.github.com/repos/huggingface/datasets/issues/4848/events
https://github.com/huggingface/datasets/pull/4848
1,338,271,833
PR_kwDODunzps49JNj_
4,848
a
{ "avatar_url": "https://avatars.githubusercontent.com/u/49282718?v=4", "events_url": "https://api.github.com/users/Mr-Robot-001/events{/privacy}", "followers_url": "https://api.github.com/users/Mr-Robot-001/followers", "following_url": "https://api.github.com/users/Mr-Robot-001/following{/other_user}", "gists_url": "https://api.github.com/users/Mr-Robot-001/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Mr-Robot-001", "id": 49282718, "login": "Mr-Robot-001", "node_id": "MDQ6VXNlcjQ5MjgyNzE4", "organizations_url": "https://api.github.com/users/Mr-Robot-001/orgs", "received_events_url": "https://api.github.com/users/Mr-Robot-001/received_events", "repos_url": "https://api.github.com/users/Mr-Robot-001/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Mr-Robot-001/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mr-Robot-001/subscriptions", "type": "User", "url": "https://api.github.com/users/Mr-Robot-001" }
[]
closed
false
null
[]
null
[]
2022-08-14T15:01:16Z
2022-08-14T15:09:59Z
2022-08-14T15:09:59Z
NONE
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4848.diff", "html_url": "https://github.com/huggingface/datasets/pull/4848", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4848.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4848" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4848/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4848/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4847
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4847/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4847/comments
https://api.github.com/repos/huggingface/datasets/issues/4847/events
https://github.com/huggingface/datasets/pull/4847
1,338,270,636
PR_kwDODunzps49JNWX
4,847
Test win ci
{ "avatar_url": "https://avatars.githubusercontent.com/u/49282718?v=4", "events_url": "https://api.github.com/users/Mr-Robot-001/events{/privacy}", "followers_url": "https://api.github.com/users/Mr-Robot-001/followers", "following_url": "https://api.github.com/users/Mr-Robot-001/following{/other_user}", "gists_url": "https://api.github.com/users/Mr-Robot-001/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Mr-Robot-001", "id": 49282718, "login": "Mr-Robot-001", "node_id": "MDQ6VXNlcjQ5MjgyNzE4", "organizations_url": "https://api.github.com/users/Mr-Robot-001/orgs", "received_events_url": "https://api.github.com/users/Mr-Robot-001/received_events", "repos_url": "https://api.github.com/users/Mr-Robot-001/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Mr-Robot-001/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mr-Robot-001/subscriptions", "type": "User", "url": "https://api.github.com/users/Mr-Robot-001" }
[]
closed
false
null
[]
null
[]
2022-08-14T14:57:00Z
2022-08-14T14:57:45Z
2022-08-14T14:57:45Z
NONE
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4847.diff", "html_url": "https://github.com/huggingface/datasets/pull/4847", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4847.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4847" }
aa
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4847/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4847/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4846
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4846/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4846/comments
https://api.github.com/repos/huggingface/datasets/issues/4846/events
https://github.com/huggingface/datasets/pull/4846
1,337,979,897
PR_kwDODunzps49IYSC
4,846
Update documentation card of miam dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/22492839?v=4", "events_url": "https://api.github.com/users/PierreColombo/events{/privacy}", "followers_url": "https://api.github.com/users/PierreColombo/followers", "following_url": "https://api.github.com/users/PierreColombo/following{/other_user}", "gists_url": "https://api.github.com/users/PierreColombo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PierreColombo", "id": 22492839, "login": "PierreColombo", "node_id": "MDQ6VXNlcjIyNDkyODM5", "organizations_url": "https://api.github.com/users/PierreColombo/orgs", "received_events_url": "https://api.github.com/users/PierreColombo/received_events", "repos_url": "https://api.github.com/users/PierreColombo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PierreColombo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PierreColombo/subscriptions", "type": "User", "url": "https://api.github.com/users/PierreColombo" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Ahahah :D not sur how i broke something by updating the README :D ", "Thanks for the fix @PierreColombo. \r\n\r\nOnce a README is modified, our CI runs tests on it, requiring additional quality fixes, so that all READMEs are progressively improved and have some minimal tags/sections/information.\r\n\r\nFor this specific README file, the additional quality requirements of the CI are: https://github.com/huggingface/datasets/runs/7819924428?check_suite_focus=true\r\n```\r\nE The following issues were found for the README at `/home/runner/work/datasets/datasets/datasets/miam/README.md`:\r\nE -\tSection `Additional Information` is missing subsection: `Dataset Curators`.\r\nE -\tSection `Additional Information` is missing subsection: `Contributions`.\r\nE -\t`Additional Information` has an extra subsection: `Benchmark Curators`. Skipping further validation checks for this subsection as expected structure is unknown.\r\n```", "Thanks a lot Albert :)))" ]
2022-08-13T14:38:55Z
2022-08-17T00:50:04Z
2022-08-14T10:26:08Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4846.diff", "html_url": "https://github.com/huggingface/datasets/pull/4846", "merged_at": "2022-08-14T10:26:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/4846.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4846" }
Hi! The paper has been published at EMNLP.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4846/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4846/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4845
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4845/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4845/comments
https://api.github.com/repos/huggingface/datasets/issues/4845/events
https://github.com/huggingface/datasets/pull/4845
1,337,928,283
PR_kwDODunzps49IOjf
4,845
Mark CI tests as xfail if Hub HTTP error
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-13T10:45:11Z
2022-08-23T04:57:12Z
2022-08-23T04:42:26Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4845.diff", "html_url": "https://github.com/huggingface/datasets/pull/4845", "merged_at": "2022-08-23T04:42:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/4845.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4845" }
In order to make testing more robust (and avoid merges to master with red tests), we could mark tests as xfailed (instead of failed) when the Hub raises some temporary HTTP errors. This PR: - marks tests as xfailed only if the Hub raises a 500 error for: - test_upstream_hub - makes pytest report the xfailed/xpassed tests. More tests could also be marked if needed. Examples of CI failures due to temporary Hub HTTP errors: - FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files - https://github.com/huggingface/datasets/runs/7806855399?check_suite_focus=true `requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/test-16603108028233/commit/main (Request ID: aZeAQ5yLktoGHQYBcJ3zo)` - FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_no_token - https://github.com/huggingface/datasets/runs/7840022996?check_suite_focus=true `requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://s3.us-east-1.amazonaws.com/lfs-staging.huggingface.co/repos/81/e3/81e3b831fa9bf23190ec041f26ef7ff6d6b71c1a937b8ec1ef1f1f05b508c089/caae596caa179cf45e7c9ac0c6d9a9cb0fe2d305291bfbb2d8b648ae26ed38b6?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20220815%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220815T144713Z&X-Amz-Expires=900&X-Amz-Signature=5ddddfe8ef2b0601e80ab41c78a4d77d921942b0d8160bcab40ff894095e6823&X-Amz-SignedHeaders=host&x-id=PutObject` - FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_private - https://github.com/huggingface/datasets/runs/7835921082?check_suite_focus=true `requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://hub-ci.huggingface.co/api/repos/create (Request ID: gL_1I7i2dii9leBhlZen-) - Internal Error - We're working hard to fix that as soon as possible!` - FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_to_hub_custom_features_image_list - https://github.com/huggingface/datasets/runs/7835920900?check_suite_focus=true - This is not 500, but 404: `requests.exceptions.HTTPError: 404 Client Error: Not Found for url: [https://hub-ci.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/test-16605586458339.git/info/lfs/objects](https://hub-ci.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/test-16605586458339.git/info/lfs/objects/batch)`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4845/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4845/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4844
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4844/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4844/comments
https://api.github.com/repos/huggingface/datasets/issues/4844/events
https://github.com/huggingface/datasets/pull/4844
1,337,878,249
PR_kwDODunzps49IFLa
4,844
Add 'val' to VALIDATION_KEYWORDS.
{ "avatar_url": "https://avatars.githubusercontent.com/u/98386959?v=4", "events_url": "https://api.github.com/users/akt42/events{/privacy}", "followers_url": "https://api.github.com/users/akt42/followers", "following_url": "https://api.github.com/users/akt42/following{/other_user}", "gists_url": "https://api.github.com/users/akt42/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/akt42", "id": 98386959, "login": "akt42", "node_id": "U_kgDOBd1EDw", "organizations_url": "https://api.github.com/users/akt42/orgs", "received_events_url": "https://api.github.com/users/akt42/received_events", "repos_url": "https://api.github.com/users/akt42/repos", "site_admin": false, "starred_url": "https://api.github.com/users/akt42/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akt42/subscriptions", "type": "User", "url": "https://api.github.com/users/akt42" }
[]
closed
false
null
[]
null
[ "@mariosasko not sure about how the reviewing process works. Maybe you can have a look because we discussed this elsewhere?", "Hi, thanks! \r\n\r\nLet's add one pattern with `val` to this test before merging: \r\nhttps://github.com/huggingface/datasets/blob/b88a656cf94c4ad972154371c83c1af759fde522/tests/test_data_files.py#L598", "_The documentation is not available anymore as the PR was closed or merged._", "@akt42 note that there is some info about splits keywords in the docs: https://huggingface.co/docs/datasets/main/en/repository_structure#split-names-keywords. I agree it's not clear that it applies not only to filenames, but to directories as well.\r\n\r\nI think \"val\" should be now added to the documentation source file here: https://github.com/huggingface/datasets/blob/main/docs/source/repository_structure.mdx?plain=1#L98", "@polinaeterna Thanks for notifying us that there is a list of supported keywords\r\n\r\nI've added \"val\" to that list and a test." ]
2022-08-13T06:49:41Z
2022-08-30T10:17:35Z
2022-08-30T10:14:54Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4844.diff", "html_url": "https://github.com/huggingface/datasets/pull/4844", "merged_at": "2022-08-30T10:14:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/4844.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4844" }
This PR fixes #4839 by adding the word `"val"` to the `VALIDATION_KEYWORDS` so that the `load_dataset()` method with `imagefolder` (and probably, some other directives as well) reads folders named `"val"` as well. I think the supported keywords have to be mentioned in the documentation as well, but I couldn't think of a proper place to add that.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4844/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4844/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4843
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4843/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4843/comments
https://api.github.com/repos/huggingface/datasets/issues/4843/events
https://github.com/huggingface/datasets/pull/4843
1,337,668,699
PR_kwDODunzps49HaWT
4,843
Fix typo in streaming docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4", "events_url": "https://api.github.com/users/flozi00/events{/privacy}", "followers_url": "https://api.github.com/users/flozi00/followers", "following_url": "https://api.github.com/users/flozi00/following{/other_user}", "gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/flozi00", "id": 47894090, "login": "flozi00", "node_id": "MDQ6VXNlcjQ3ODk0MDkw", "organizations_url": "https://api.github.com/users/flozi00/orgs", "received_events_url": "https://api.github.com/users/flozi00/received_events", "repos_url": "https://api.github.com/users/flozi00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flozi00/subscriptions", "type": "User", "url": "https://api.github.com/users/flozi00" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-12T20:18:21Z
2022-08-14T11:43:30Z
2022-08-14T11:02:09Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4843.diff", "html_url": "https://github.com/huggingface/datasets/pull/4843", "merged_at": "2022-08-14T11:02:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/4843.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4843" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4843/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4843/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4842
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4842/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4842/comments
https://api.github.com/repos/huggingface/datasets/issues/4842/events
https://github.com/huggingface/datasets/pull/4842
1,337,527,764
PR_kwDODunzps49G8CC
4,842
Update stackexchange license
{ "avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4", "events_url": "https://api.github.com/users/cakiki/events{/privacy}", "followers_url": "https://api.github.com/users/cakiki/followers", "following_url": "https://api.github.com/users/cakiki/following{/other_user}", "gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cakiki", "id": 3664563, "login": "cakiki", "node_id": "MDQ6VXNlcjM2NjQ1NjM=", "organizations_url": "https://api.github.com/users/cakiki/orgs", "received_events_url": "https://api.github.com/users/cakiki/received_events", "repos_url": "https://api.github.com/users/cakiki/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cakiki/subscriptions", "type": "User", "url": "https://api.github.com/users/cakiki" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-12T17:39:06Z
2022-08-14T10:43:18Z
2022-08-14T10:28:49Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4842.diff", "html_url": "https://github.com/huggingface/datasets/pull/4842", "merged_at": "2022-08-14T10:28:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/4842.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4842" }
The correct license of the stackexchange subset of the Pile is `cc-by-sa-4.0`, as can be seen, for example, here: https://stackoverflow.com/help/licensing
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4842/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4842/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4841
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4841/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4841/comments
https://api.github.com/repos/huggingface/datasets/issues/4841/events
https://github.com/huggingface/datasets/pull/4841
1,337,401,243
PR_kwDODunzps49Gf0I
4,841
Update ted_talks_iwslt license to include ND
{ "avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4", "events_url": "https://api.github.com/users/cakiki/events{/privacy}", "followers_url": "https://api.github.com/users/cakiki/followers", "following_url": "https://api.github.com/users/cakiki/following{/other_user}", "gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cakiki", "id": 3664563, "login": "cakiki", "node_id": "MDQ6VXNlcjM2NjQ1NjM=", "organizations_url": "https://api.github.com/users/cakiki/orgs", "received_events_url": "https://api.github.com/users/cakiki/received_events", "repos_url": "https://api.github.com/users/cakiki/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cakiki/subscriptions", "type": "User", "url": "https://api.github.com/users/cakiki" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-12T16:14:52Z
2022-08-14T11:15:22Z
2022-08-14T11:00:22Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4841.diff", "html_url": "https://github.com/huggingface/datasets/pull/4841", "merged_at": "2022-08-14T11:00:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/4841.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4841" }
Excerpt from the paper's abstract: "Aside from its cultural and social relevance, this content, which is published under the Creative Commons BY-NC-ND license, also represents a precious language resource for the machine translation research community"
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4841/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4841/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4840
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4840/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4840/comments
https://api.github.com/repos/huggingface/datasets/issues/4840/events
https://github.com/huggingface/datasets/issues/4840
1,337,342,672
I_kwDODunzps5PtjrQ
4,840
Dataset Viewer issue for darragh/demo_data_raw3
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[]
open
false
null
[]
null
[ "do you have an idea of why it can occur @huggingface/datasets? The dataset consists of a single parquet file.", "Thanks for reporting @severo.\r\n\r\nI'm not able to reproduce that error. I get instead:\r\n```\r\nFileNotFoundError: [Errno 2] No such file or directory: 'orix/data/ChiSig/唐合乐-9-3.jpg'\r\n```\r\n\r\nWhich pyarrow version are you using? Mine is 6.0.1. ", "OK, I get now your error when not streaming.", "OK!\r\n\r\nIf it's useful, the pyarrow version is 7.0.0:\r\n\r\nhttps://github.com/huggingface/datasets-server/blob/487c39d87998f8d5a35972f1027d6c8e588e622d/services/worker/poetry.lock#L1537-L1543", "Apparently, there is something weird with that Parquet file: its schema is:\r\n```\r\nimages: extension<arrow.py_extension_type<pyarrow.lib.UnknownExtensionType>>\r\n```\r\n\r\nI have forced a right schema:\r\n```python\r\nfrom datasets import Features, Image, load_dataset\r\n\r\nfeatures = Features({\"images\": Image()})\r\nds = datasets.load_dataset(\"parquet\", split=\"train\", data_files=\"train-00000-of-00001.parquet\", features=features)\r\n```\r\nand then recreated a new Parquet file:\r\n```python\r\nds.to_parquet(\"train.parquet\")\r\n```\r\n\r\nNow this Parquet file has the right schema:\r\n```\r\nimages: struct<bytes: binary, path: string>\r\n child 0, bytes: binary\r\n child 1, path: string\r\n```\r\nand can be loaded normally:\r\n```python\r\nIn [26]: ds = load_dataset(\"parquet\", split=\"train\", data_files=\"dataset.parquet\")\r\nn [27]: ds\r\nOut[27]: \r\nDataset({\r\n features: ['images'],\r\n num_rows: 20\r\n})\r\n```" ]
2022-08-12T15:22:58Z
2022-09-08T07:55:44Z
null
CONTRIBUTOR
null
null
null
### Link https://huggingface.co/datasets/darragh/demo_data_raw3 ### Description ``` Exception: ValueError Message: Arrow type extension<arrow.py_extension_type<pyarrow.lib.UnknownExtensionType>> does not have a datasets dtype equivalent. ``` reported by @NielsRogge ### Owner No
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4840/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4840/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4839
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4839/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4839/comments
https://api.github.com/repos/huggingface/datasets/issues/4839/events
https://github.com/huggingface/datasets/issues/4839
1,337,206,377
I_kwDODunzps5PtCZp
4,839
ImageFolder dataset builder does not read the validation data set if it is named as "val"
{ "avatar_url": "https://avatars.githubusercontent.com/u/98386959?v=4", "events_url": "https://api.github.com/users/akt42/events{/privacy}", "followers_url": "https://api.github.com/users/akt42/followers", "following_url": "https://api.github.com/users/akt42/following{/other_user}", "gists_url": "https://api.github.com/users/akt42/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/akt42", "id": 98386959, "login": "akt42", "node_id": "U_kgDOBd1EDw", "organizations_url": "https://api.github.com/users/akt42/orgs", "received_events_url": "https://api.github.com/users/akt42/received_events", "repos_url": "https://api.github.com/users/akt42/repos", "site_admin": false, "starred_url": "https://api.github.com/users/akt42/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akt42/subscriptions", "type": "User", "url": "https://api.github.com/users/akt42" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/98386959?v=4", "events_url": "https://api.github.com/users/akt42/events{/privacy}", "followers_url": "https://api.github.com/users/akt42/followers", "following_url": "https://api.github.com/users/akt42/following{/other_user}", "gists_url": "https://api.github.com/users/akt42/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/akt42", "id": 98386959, "login": "akt42", "node_id": "U_kgDOBd1EDw", "organizations_url": "https://api.github.com/users/akt42/orgs", "received_events_url": "https://api.github.com/users/akt42/received_events", "repos_url": "https://api.github.com/users/akt42/repos", "site_admin": false, "starred_url": "https://api.github.com/users/akt42/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akt42/subscriptions", "type": "User", "url": "https://api.github.com/users/akt42" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/98386959?v=4", "events_url": "https://api.github.com/users/akt42/events{/privacy}", "followers_url": "https://api.github.com/users/akt42/followers", "following_url": "https://api.github.com/users/akt42/following{/other_user}", "gists_url": "https://api.github.com/users/akt42/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/akt42", "id": 98386959, "login": "akt42", "node_id": "U_kgDOBd1EDw", "organizations_url": "https://api.github.com/users/akt42/orgs", "received_events_url": "https://api.github.com/users/akt42/received_events", "repos_url": "https://api.github.com/users/akt42/repos", "site_admin": false, "starred_url": "https://api.github.com/users/akt42/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akt42/subscriptions", "type": "User", "url": "https://api.github.com/users/akt42" } ]
null
[ "#take" ]
2022-08-12T13:26:00Z
2022-08-30T10:14:55Z
2022-08-30T10:14:55Z
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** Currently, the `'imagefolder'` dataset builder in [`load_dataset()`](https://github.com/huggingface/datasets/blob/2.4.0/src/datasets/load.py#L1541) only [supports](https://github.com/huggingface/datasets/blob/6c609a322da994de149b2c938f19439bca99408e/src/datasets/data_files.py#L31) the following names for the validation dataset directory: `["validation", "valid", "dev"]`. When the validation directory is named `'val'`, the dataset will not have a validation split. I expected this to be a trivial task but ended up spending a lot of time before realizing that only the above names are supported. Here's a minimal example of `val` not being recognized: ```python import os import numpy as np import cv2 from datasets import load_dataset # creating a dummy data set with the following structure: # ROOT # | -- train # | ---- class_1 # | ---- class_2 # | -- val # | ---- class_1 # | ---- class_2 ROOT = "data" for which in ["train", "val"]: for class_name in ["class_1", "class_2"]: dir_name = os.path.join(ROOT, which, class_name) if not os.path.exists(dir_name): os.makedirs(dir_name) for i in range(10): cv2.imwrite( os.path.join(dir_name, f"{i}.png"), np.random.random((224, 224)) ) # trying to create a data set dataset = load_dataset( "imagefolder", data_dir=ROOT ) >> dataset DatasetDict({ train: Dataset({ features: ['image', 'label'], num_rows: 20 }) }) # ^ note how the dataset only has a 'train' subset ``` **Describe the solution you'd like** The suggestion is to add `"val"` to [that list](https://github.com/huggingface/datasets/blob/6c609a322da994de149b2c938f19439bca99408e/src/datasets/data_files.py#L31), as that's a commonly used name for the validation directory. Also, in the documentation, explicitly mention that only such directory names are supported as train/val/test directories to avoid confusion. **Describe alternatives you've considered** In the documentation, explicitly mention that only such directory names are supported as train/val/test directories, without adding `val` to the above list. **Additional context** A question asked in the forum: [Loading an imagenet-style image dataset with train/val directories](https://discuss.huggingface.co/t/loading-an-imagenet-style-image-dataset-with-train-val-directories/21554)
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4839/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4839/timeline
null
completed
false
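A hedged workaround sketch for the `val` directory issue in record #4839 above: rather than relying on split-keyword inference, the split directories can usually be mapped explicitly via `data_files` glob patterns. The `data/train` and `data/val` layout is taken from the issue's example; whether label inference from class subfolders behaves identically with explicit globs is an assumption here, not something confirmed in that thread.

```python
# Hedged sketch: explicitly map "train" and "val" directories to splits,
# assuming the "data/<split>/<class>/*.png" layout from issue #4839.
from datasets import load_dataset

dataset = load_dataset(
    "imagefolder",
    data_files={
        "train": "data/train/**",      # class subfolders under data/train
        "validation": "data/val/**",   # 'val' folder mapped manually to the validation split
    },
)
print(dataset)  # expected to show both 'train' and 'validation' splits
```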
https://api.github.com/repos/huggingface/datasets/issues/4838
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4838/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4838/comments
https://api.github.com/repos/huggingface/datasets/issues/4838/events
https://github.com/huggingface/datasets/pull/4838
1,337,194,918
PR_kwDODunzps49F08R
4,838
Fix documentation card of adv_glue dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "The failing test has nothing to do with this PR:\r\n```\r\nFAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files\r\n```" ]
2022-08-12T13:15:26Z
2022-08-15T10:17:14Z
2022-08-15T10:02:11Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4838.diff", "html_url": "https://github.com/huggingface/datasets/pull/4838", "merged_at": "2022-08-15T10:02:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/4838.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4838" }
Fix documentation card of adv_glue dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4838/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4838/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4837
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4837/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4837/comments
https://api.github.com/repos/huggingface/datasets/issues/4837/events
https://github.com/huggingface/datasets/pull/4837
1,337,079,723
PR_kwDODunzps49Fb6l
4,837
Add support for CSV metadata files to ImageFolder
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Cool thanks ! Maybe let's include this change after the refactoring from FolderBasedBuilder in #3963 to avoid dealing with too many unpleasant conflicts ?", "@lhoestq I resolved the conflicts (AudioFolder also supports CSV metadata now). Let me know what you think.\r\n", "@lhoestq Thanks for the suggestion! Indeed it makes more sense to use CSV as the default format in the folder-based builders." ]
2022-08-12T11:19:18Z
2022-08-31T12:01:27Z
2022-08-31T11:59:07Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4837.diff", "html_url": "https://github.com/huggingface/datasets/pull/4837", "merged_at": "2022-08-31T11:59:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/4837.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4837" }
Fix #4814
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4837/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4837/timeline
null
null
true
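For the CSV-metadata ImageFolder support merged in #4837 above, a hedged sketch of the expected usage: a `metadata.csv` file sitting next to the images, keyed by a `file_name` column. The column requirement and layout follow the pre-existing JSONL metadata convention and are assumed to carry over to CSV; the file names and the `caption` column are illustrative, not taken from the PR.

```python
# Hedged sketch: ImageFolder with a CSV metadata file (assumed layout).
#
# my_dataset/
# ├── metadata.csv      # columns: file_name, caption
# ├── img_0001.png
# └── img_0002.png
#
# metadata.csv is assumed to look like:
# file_name,caption
# img_0001.png,"a first example caption"
# img_0002.png,"a second example caption"
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="my_dataset", split="train")
print(ds[0]["caption"])  # extra CSV columns are expected to surface as features
```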
https://api.github.com/repos/huggingface/datasets/issues/4836
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4836/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4836/comments
https://api.github.com/repos/huggingface/datasets/issues/4836/events
https://github.com/huggingface/datasets/issues/4836
1,337,067,632
I_kwDODunzps5Psghw
4,836
Is it possible to pass multiple links to a split in load script?
{ "avatar_url": "https://avatars.githubusercontent.com/u/43045767?v=4", "events_url": "https://api.github.com/users/sadrasabouri/events{/privacy}", "followers_url": "https://api.github.com/users/sadrasabouri/followers", "following_url": "https://api.github.com/users/sadrasabouri/following{/other_user}", "gists_url": "https://api.github.com/users/sadrasabouri/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sadrasabouri", "id": 43045767, "login": "sadrasabouri", "node_id": "MDQ6VXNlcjQzMDQ1NzY3", "organizations_url": "https://api.github.com/users/sadrasabouri/orgs", "received_events_url": "https://api.github.com/users/sadrasabouri/received_events", "repos_url": "https://api.github.com/users/sadrasabouri/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sadrasabouri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sadrasabouri/subscriptions", "type": "User", "url": "https://api.github.com/users/sadrasabouri" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
2022-08-12T11:06:11Z
2022-08-12T11:06:11Z
null
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** I wanted to use a Python loading script in Hugging Face Datasets that uses different sources of text (it's essentially a compilation of multiple datasets + my own dataset). Based on how `load_dataset` [works](https://huggingface.co/docs/datasets/loading), I assumed I could do something like below in my loading script: ```python ... _URL = "MY_DATASET_URL/resolve/main/data/" _URLS = { "train": [ "FIRST_URL_TO.txt", _URL + "train-00000-of-00001-676bfebbc8742592.parquet" ] } ... ``` but when loading the dataset it raises the following error: ```python File ~/.local/lib/python3.8/site-packages/datasets/builder.py:704, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 702 logger.warning("HF google storage unreachable. Downloading and preparing it from source") 703 if not downloaded_from_gcs: --> 704 self._download_and_prepare( 705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs ... 668 if isinstance(a, str): 669 # Force-cast str subclasses to str (issue #21127) 670 parts.append(str(a)) TypeError: expected str, bytes or os.PathLike object, not list ``` **Describe the solution you'd like** I believe that since it's possible for `load_dataset` to take a list of URLs instead of just a single URL for the `train` split, it should be possible here too. **Describe alternatives you've considered** An alternative solution would be to download all the needed datasets locally and `push_to_hub` them all, but since the datasets I'm talking about are huge, that's not among my options. **Additional context** I think loading `text` alongside `parquet` is a completely different issue, but I believe I can figure it out by proposing a config for my dataset to load each entry of `_URLS['train']` separately, either by `load_dataset("text", ...` or `load_dataset("parquet", ...`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4836/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4836/timeline
null
null
false
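A minimal sketch related to record #4836 above: inside a loading script, `dl_manager.download_and_extract` accepts nested structures (e.g. a dict whose values are lists of URLs) and returns the same structure with local paths, so one split can point at several files and the example generator can iterate over them. The URLs and the single-column text schema below are placeholders, and mixing text with parquet sources is deliberately glossed over; this illustrates the list-per-split pattern under those assumptions, not the exact fix for the reported `TypeError`.

```python
# Hedged sketch of a loading script where one split maps to several files.
# URLs and the example schema are placeholders, not the reporter's real data.
import datasets

_URLS = {
    "train": [
        "https://example.com/part1.txt",
        "https://example.com/part2.txt",
    ],
}


class MultiSourceText(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # download_and_extract preserves the nested structure:
        # a list of URLs comes back as a list of local paths.
        paths = dl_manager.download_and_extract(_URLS)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"filepaths": paths["train"]},
            )
        ]

    def _generate_examples(self, filepaths):
        key = 0
        for path in filepaths:  # iterate over every downloaded file for the split
            with open(path, encoding="utf-8") as f:
                for line in f:
                    yield key, {"text": line.rstrip("\n")}
                    key += 1
```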
https://api.github.com/repos/huggingface/datasets/issues/4835
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4835/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4835/comments
https://api.github.com/repos/huggingface/datasets/issues/4835/events
https://github.com/huggingface/datasets/pull/4835
1,336,994,835
PR_kwDODunzps49FJg9
4,835
Fix documentation card of ethos dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-12T09:51:06Z
2022-08-12T13:13:55Z
2022-08-12T12:59:39Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4835.diff", "html_url": "https://github.com/huggingface/datasets/pull/4835", "merged_at": "2022-08-12T12:59:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/4835.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4835" }
Fix documentation card of ethos dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4835/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4835/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4834
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4834/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4834/comments
https://api.github.com/repos/huggingface/datasets/issues/4834/events
https://github.com/huggingface/datasets/pull/4834
1,336,993,511
PR_kwDODunzps49FJOu
4,834
Fix documentation card of recipe_nlg dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-12T09:49:39Z
2022-08-12T11:28:18Z
2022-08-12T11:13:40Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4834.diff", "html_url": "https://github.com/huggingface/datasets/pull/4834", "merged_at": "2022-08-12T11:13:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/4834.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4834" }
Fix documentation card of recipe_nlg dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4834/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4834/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4833
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4833/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4833/comments
https://api.github.com/repos/huggingface/datasets/issues/4833/events
https://github.com/huggingface/datasets/pull/4833
1,336,946,965
PR_kwDODunzps49E_Nk
4,833
Fix missing tags in dataset cards
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-12T09:04:52Z
2022-09-22T14:41:23Z
2022-08-12T09:45:55Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4833.diff", "html_url": "https://github.com/huggingface/datasets/pull/4833", "merged_at": "2022-08-12T09:45:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/4833.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4833" }
Fix missing tags in dataset cards: - boolq - break_data - definite_pronoun_resolution - emo - kor_nli - pg19 - quartz - sciq - squad_es - wmt14 - wmt15 - wmt16 - wmt17 - wmt18 - wmt19 - wmt_t2t This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4833/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4833/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4832
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4832/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4832/comments
https://api.github.com/repos/huggingface/datasets/issues/4832/events
https://github.com/huggingface/datasets/pull/4832
1,336,727,389
PR_kwDODunzps49EQav
4,832
Fix tags in dataset cards
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "The non-passing tests are caused by other missing information in the dataset cards." ]
2022-08-12T04:11:23Z
2022-08-12T04:41:55Z
2022-08-12T04:27:24Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4832.diff", "html_url": "https://github.com/huggingface/datasets/pull/4832", "merged_at": "2022-08-12T04:27:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/4832.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4832" }
Fix wrong tags in dataset cards.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4832/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4832/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4831
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4831/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4831/comments
https://api.github.com/repos/huggingface/datasets/issues/4831/events
https://github.com/huggingface/datasets/pull/4831
1,336,199,643
PR_kwDODunzps49Cibf
4,831
Add oversampling strategies to interleave datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4", "events_url": "https://api.github.com/users/ylacombe/events{/privacy}", "followers_url": "https://api.github.com/users/ylacombe/followers", "following_url": "https://api.github.com/users/ylacombe/following{/other_user}", "gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ylacombe", "id": 52246514, "login": "ylacombe", "node_id": "MDQ6VXNlcjUyMjQ2NTE0", "organizations_url": "https://api.github.com/users/ylacombe/orgs", "received_events_url": "https://api.github.com/users/ylacombe/received_events", "repos_url": "https://api.github.com/users/ylacombe/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions", "type": "User", "url": "https://api.github.com/users/ylacombe" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4831). All of your documentation changes will be reflected on that endpoint.", "Hi @lhoestq, \r\nThanks for your review! I've added the requested mention in the documentation and corrected the Error type in `interleave_datasets`. \r\nI've also added test cases in `test_arrow_dataset.py`, which was useful since it allow me to detect an error in the case of an oversampling strategy with no sampling probabilities. \r\nCould you double check this part ? I've commented the code to explain the approach.\r\nThanks!\r\n" ]
2022-08-11T16:24:51Z
2022-12-04T11:23:54Z
2022-08-24T16:46:07Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4831.diff", "html_url": "https://github.com/huggingface/datasets/pull/4831", "merged_at": "2022-08-24T16:46:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/4831.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4831" }
Hello everyone, Here is a proposal to improve the `interleave_datasets` function. Following Issue #3064 and @lhoestq's [comment](https://github.com/huggingface/datasets/issues/3064#issuecomment-1022333385), I propose here code that performs oversampling when interleaving a `Dataset` list. I encountered this problem myself while trying to implement training on a multilingual dataset following a training strategy similar to that of the [XLSUM paper](https://arxiv.org/pdf/2106.13822.pdf), a multilingual abstractive summarization dataset where the multilingual training set is created by sampling from the languages following a smoothing strategy. The main idea is to sample languages that have a low number of samples more frequently than the other languages. As in Issue #3064, the current default strategy is an undersampling strategy, which stops as soon as a dataset runs out of samples. The new `all_exhausted` strategy stops building the new dataset as soon as all samples in each dataset have been added at least once. How it works in practice: - if ``probabilities`` is `None` and the strategy is `all_exhausted`, it simply performs a round-robin interleaving that stops when the longest dataset is out of samples. Here the new dataset length will be $maxLengthDataset*nbDataset$. - if ``probabilities`` is not `None` and the strategy is `all_exhausted`, it keeps track of the datasets which ran out of samples but continues to add them to the new dataset, and stops as soon as every dataset has run out of samples at least once. - In the other cases, it keeps the same behaviour as before, except that this time, when probabilities are specified, it really stops AS SOON AS a dataset is out of samples. More on the last sentence: The previous example of `interleave_datasets` was: >>> from datasets import Dataset, interleave_datasets >>> d1 = Dataset.from_dict({"a": [0, 1, 2]}) >>> d2 = Dataset.from_dict({"a": [10, 11, 12]}) >>> d3 = Dataset.from_dict({"a": [20, 21, 22]}) >>> dataset = interleave_datasets([d1, d2, d3]) >>> dataset["a"] [0, 10, 20, 1, 11, 21, 2, 12, 22] >>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42) >>> dataset["a"] [10, 0, 11, 1, 2, 20, 12] With my implementation, `dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)` gives: >>> dataset["a"] [10, 0, 11, 1, 2] because `d1` is already out of samples just after `2` is added.
Example of the results of applying the different strategies: >>> from datasets import Dataset, interleave_datasets >>> d1 = Dataset.from_dict({"a": [0, 1, 2]}) >>> d2 = Dataset.from_dict({"a": [10, 11, 12]}) >>> d3 = Dataset.from_dict({"a": [20, 21, 22]}) >>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42, stopping_strategy="all_exhausted") >>> dataset["a"] [10, 0, 11, 1, 2, 20, 12, 10, 0, 1, 2, 21, 0, 11, 1, 2, 0, 1, 12, 2, 10, 0, 22] >>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42) >>> dataset["a"] [10, 0, 11, 1, 2] >>> dataset = interleave_datasets([d1, d2, d3]) >>> dataset["a"] [0, 10, 20, 1, 11, 21, 2, 12, 22] >>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted") >>> dataset["a"] [0, 10, 20, 1, 11, 21, 2, 12, 22] >>> d1 = Dataset.from_dict({"a": [0, 1, 2]}) >>> d2 = Dataset.from_dict({"a": [10, 11, 12, 13]}) >>> d3 = Dataset.from_dict({"a": [20, 21, 22, 23, 24]}) >>> dataset = interleave_datasets([d1, d2, d3]) >>> dataset["a"] [0, 10, 20, 1, 11, 21, 2, 12, 22] >>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted") >>> dataset["a"] [0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 0, 24] >>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42) >>> dataset["a"] [10, 0, 11, 1, 2] >>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42, stopping_strategy="all_exhausted") >>> dataset["a"] [10, 0, 11, 1, 2, 20, 12, 13, ..., 0, 1, 2, 0, 24] **Final note:** I've been using that code for a research project involving a large-scale multilingual dataset. One should be careful when using oversampling to avoid exploding the size of the dataset. For example, if a very large dataset has a low probability of being sampled, the final dataset may be several times the size of that large dataset.
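To make the `all_exhausted` behaviour concrete, here is a minimal, self-contained sketch of a round-robin oversampling pass over dataset indices. It only illustrates the strategy described above for the case without sampling probabilities; it is not the code merged in this PR.

```python
# Minimal sketch of an "all_exhausted" round-robin pass (illustration only, not the PR's code).
from itertools import cycle

def all_exhausted_indices(lengths):
    """Yield (dataset_idx, sample_idx) pairs, restarting shorter datasets
    until every dataset has been fully seen at least once."""
    positions = [0] * len(lengths)      # next sample to take from each dataset
    exhausted = [False] * len(lengths)  # whether each dataset has been fully seen once
    for ds_idx in cycle(range(len(lengths))):
        yield ds_idx, positions[ds_idx]
        positions[ds_idx] += 1
        if positions[ds_idx] == lengths[ds_idx]:
            exhausted[ds_idx] = True
            positions[ds_idx] = 0       # loop over the short dataset again (oversampling)
        if all(exhausted):
            break

# Three datasets of lengths 3, 4 and 5 -> 5 * 3 = 15 interleaved indices,
# matching the maxLengthDataset * nbDataset length stated above.
print(len(list(all_exhausted_indices([3, 4, 5]))))
```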
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4831/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4831/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4830
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4830/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4830/comments
https://api.github.com/repos/huggingface/datasets/issues/4830/events
https://github.com/huggingface/datasets/pull/4830
1,336,177,937
PR_kwDODunzps49Cdro
4,830
Fix task tags in dataset cards
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "The non-passing tests are caused by other missing information in the dataset cards." ]
2022-08-11T16:06:06Z
2022-08-11T16:37:27Z
2022-08-11T16:23:00Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4830.diff", "html_url": "https://github.com/huggingface/datasets/pull/4830", "merged_at": "2022-08-11T16:23:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/4830.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4830" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4830/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4830/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4829
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4829/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4829/comments
https://api.github.com/repos/huggingface/datasets/issues/4829/events
https://github.com/huggingface/datasets/issues/4829
1,336,068,068
I_kwDODunzps5Posfk
4,829
Misalignment between card tag validation and docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "(Note that the doc is aligned with the hub validation rules, and the \"ground truth\" is the hub validation rules given that they apply to all datasets, not just the canonical ones)" ]
2022-08-11T14:44:45Z
2022-08-11T14:46:35Z
null
MEMBER
null
null
null
## Describe the bug As pointed out in another issue: https://github.com/huggingface/datasets/pull/4827#discussion_r943536284 the validation of the dataset card tags is not aligned with their documentation: e.g. - implementation: `license: List[str]` - docs: `license: Union[str, List[str]]` They should be aligned. CC: @julien-c
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4829/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4829/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4828
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4828/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4828/comments
https://api.github.com/repos/huggingface/datasets/issues/4828/events
https://github.com/huggingface/datasets/pull/4828
1,336,040,168
PR_kwDODunzps49B_vb
4,828
Support PIL Image objects in `add_item`/`add_column`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4828). All of your documentation changes will be reflected on that endpoint." ]
2022-08-11T14:25:45Z
2022-10-19T09:50:32Z
null
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4828.diff", "html_url": "https://github.com/huggingface/datasets/pull/4828", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4828.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4828" }
Fix #4796 PS: We should also improve the type inference in `OptimizedTypeSequence` to make it possible to infer the complex types (only `Image` currently) in nested arrays (e.g. `[[pil_image], [pil_image, pil_image]]` or `[{"img": pil_image}]`), but I plan to address this in a separate PR.
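For context, a hedged usage sketch of the behaviour this (still open) PR proposes: passing PIL objects directly to `add_column`/`add_item`. The image path is a placeholder, and released versions of `datasets` may not accept these calls yet.

```python
# Hedged sketch of the proposed behaviour: PIL images accepted by add_column/add_item.
from datasets import Dataset
from PIL import Image as PILImage

img = PILImage.open("cat.png")  # placeholder path, assumed to exist locally

ds = Dataset.from_dict({"text": ["a cat"]})
ds = ds.add_column("image", [img])                       # PIL object encoded as an Image feature
ds = ds.add_item({"text": "another cat", "image": img})  # same for add_item
print(ds.features)
```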
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4828/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4828/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4827
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4827/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4827/comments
https://api.github.com/repos/huggingface/datasets/issues/4827/events
https://github.com/huggingface/datasets/pull/4827
1,335,994,312
PR_kwDODunzps49B1zi
4,827
Add license metadata to pg19
{ "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/julien-c", "id": 326577, "login": "julien-c", "node_id": "MDQ6VXNlcjMyNjU3Nw==", "organizations_url": "https://api.github.com/users/julien-c/orgs", "received_events_url": "https://api.github.com/users/julien-c/received_events", "repos_url": "https://api.github.com/users/julien-c/repos", "site_admin": false, "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "type": "User", "url": "https://api.github.com/users/julien-c" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-11T13:52:20Z
2022-08-11T15:01:03Z
2022-08-11T14:46:38Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4827.diff", "html_url": "https://github.com/huggingface/datasets/pull/4827", "merged_at": "2022-08-11T14:46:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/4827.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4827" }
As reported over email by Roy Rijkers
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4827/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4827/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4826
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4826/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4826/comments
https://api.github.com/repos/huggingface/datasets/issues/4826/events
https://github.com/huggingface/datasets/pull/4826
1,335,987,583
PR_kwDODunzps49B0V3
4,826
Fix language tags in dataset cards
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "The non-passing tests are caused by other missing information in the dataset cards." ]
2022-08-11T13:47:14Z
2022-08-11T14:17:48Z
2022-08-11T14:03:12Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4826.diff", "html_url": "https://github.com/huggingface/datasets/pull/4826", "merged_at": "2022-08-11T14:03:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/4826.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4826" }
Fix language tags in all dataset cards, so that they are validated (aligned with our `languages.json` resource).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4826/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4826/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4825
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4825/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4825/comments
https://api.github.com/repos/huggingface/datasets/issues/4825/events
https://github.com/huggingface/datasets/pull/4825
1,335,856,882
PR_kwDODunzps49BYWL
4,825
[Windows] Fix Access Denied when using os.rename()
{ "avatar_url": "https://avatars.githubusercontent.com/u/8703022?v=4", "events_url": "https://api.github.com/users/DougTrajano/events{/privacy}", "followers_url": "https://api.github.com/users/DougTrajano/followers", "following_url": "https://api.github.com/users/DougTrajano/following{/other_user}", "gists_url": "https://api.github.com/users/DougTrajano/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/DougTrajano", "id": 8703022, "login": "DougTrajano", "node_id": "MDQ6VXNlcjg3MDMwMjI=", "organizations_url": "https://api.github.com/users/DougTrajano/orgs", "received_events_url": "https://api.github.com/users/DougTrajano/received_events", "repos_url": "https://api.github.com/users/DougTrajano/repos", "site_admin": false, "starred_url": "https://api.github.com/users/DougTrajano/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DougTrajano/subscriptions", "type": "User", "url": "https://api.github.com/users/DougTrajano" }
[]
closed
false
null
[]
null
[ "Cool thank you ! Maybe we can just replace `os.rename` by `shutil.move` instead ?", "> Cool thank you ! Maybe we can just replace `os.rename` by `shutil.move` instead ?\r\n\r\nYes, I think that could be a better solution, but I didn't test it in Linux (e.g. Ubuntu) to guarantee that `os.rename()` could be completely replaced by `shutil.move()`.", "AFAIK `shutil.move` does call `os.rename` first before doing extra work to make it work on windows, so this is should be a safe safe change for linux ;)", "> AFAIK `shutil.move` does call `os.rename` first before doing extra work to make it work on windows, so this is should be a safe safe change for linux ;)\r\n\r\nalright, let me change the PR then.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4825). All of your documentation changes will be reflected on that endpoint.", "Hi @lhoestq looks like one of the tests failed, but is not related to this change, do I need to do something from my side?" ]
2022-08-11T11:57:15Z
2022-08-24T13:09:07Z
2022-08-24T13:09:07Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4825.diff", "html_url": "https://github.com/huggingface/datasets/pull/4825", "merged_at": "2022-08-24T13:09:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/4825.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4825" }
In this PR, we include an additional step for when `os.rename()` raises a `PermissionError`: in that case, we fall back to `shutil.move()` on the temp files. Fix #2937
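For illustration, a generic sketch of the fallback pattern described above, not the exact diff (note that the review comments suggest the merged change may simply replace `os.rename` with `shutil.move` everywhere):

```python
# Generic sketch of the described fallback: try os.rename, fall back to shutil.move
# when the OS denies the rename (e.g. on Windows while a file handle is still open).
import os
import shutil

def move_with_fallback(src: str, dst: str) -> None:
    try:
        os.rename(src, dst)
    except PermissionError:
        shutil.move(src, dst)
```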
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4825/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4825/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4824
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4824/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4824/comments
https://api.github.com/repos/huggingface/datasets/issues/4824/events
https://github.com/huggingface/datasets/pull/4824
1,335,826,639
PR_kwDODunzps49BR5H
4,824
Fix titles in dataset cards
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "The non-passing tests are caused by other missing information in the dataset cards." ]
2022-08-11T11:27:48Z
2022-08-11T13:46:11Z
2022-08-11T12:56:49Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4824.diff", "html_url": "https://github.com/huggingface/datasets/pull/4824", "merged_at": "2022-08-11T12:56:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/4824.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4824" }
Fix all the titles in the dataset cards, so that they conform to the required format.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4824/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4824/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4823
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4823/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4823/comments
https://api.github.com/repos/huggingface/datasets/issues/4823/events
https://github.com/huggingface/datasets/pull/4823
1,335,687,033
PR_kwDODunzps49A0O_
4,823
Update data URL in mkqa dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-11T09:16:13Z
2022-08-11T09:51:50Z
2022-08-11T09:37:52Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4823.diff", "html_url": "https://github.com/huggingface/datasets/pull/4823", "merged_at": "2022-08-11T09:37:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/4823.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4823" }
Update data URL in mkqa dataset. Fix #4817.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4823/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4823/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4822
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4822/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4822/comments
https://api.github.com/repos/huggingface/datasets/issues/4822/events
https://github.com/huggingface/datasets/issues/4822
1,335,675,352
I_kwDODunzps5PnMnY
4,822
Moving dataset between namespaces breaks dataset viewer
{ "avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4", "events_url": "https://api.github.com/users/cakiki/events{/privacy}", "followers_url": "https://api.github.com/users/cakiki/followers", "following_url": "https://api.github.com/users/cakiki/following{/other_user}", "gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cakiki", "id": 3664563, "login": "cakiki", "node_id": "MDQ6VXNlcjM2NjQ1NjM=", "organizations_url": "https://api.github.com/users/cakiki/orgs", "received_events_url": "https://api.github.com/users/cakiki/received_events", "repos_url": "https://api.github.com/users/cakiki/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cakiki/subscriptions", "type": "User", "url": "https://api.github.com/users/cakiki" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" } ]
null
[ "Let's keep open for now. We should try to reproduce" ]
2022-08-11T09:05:30Z
2022-09-16T20:03:09Z
null
CONTRIBUTOR
null
null
null
## Describe the bug I moved a dataset from my own namespace to an org and that broke the dataset viewer. To fix it I had to manually edit the `dataset_info.json` file and change the first key in the json from `username--datasetname` to `orgname--datasetname` ## Steps to reproduce the bug What I did was: 1- Upload a dataset to my own namespace using `push_to_hub` 2- Move the dataset from my namespace to an org using the web interface. ## Expected results For the file to be changed accordingly. ## Actual results Broken dataset viewer. ## Environment info - `datasets` version: 2.3.3.dev0 - Platform: Linux-4.15.0-189-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.5 - PyArrow version: 7.0.0 - Pandas version: 1.3.5
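A hedged sketch of the manual workaround the author describes: renaming the stale top-level key in the exported info JSON after the move. The file name and key names are taken from the report above; the actual layout of the metadata file may differ.

```python
# Sketch of the manual workaround reported above (names are illustrative).
import json

path = "dataset_info.json"  # file name as reported in the issue

with open(path) as f:
    info = json.load(f)

# Replace the stale "<user>--<dataset>" key with the new "<org>--<dataset>" key.
info["orgname--datasetname"] = info.pop("username--datasetname")

with open(path, "w") as f:
    json.dump(info, f, indent=2)
```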
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4822/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4822/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4821
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4821/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4821/comments
https://api.github.com/repos/huggingface/datasets/issues/4821/events
https://github.com/huggingface/datasets/pull/4821
1,335,664,588
PR_kwDODunzps49AvaE
4,821
Fix train_test_split docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NielsRogge", "id": 48327001, "login": "NielsRogge", "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "repos_url": "https://api.github.com/users/NielsRogge/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "type": "User", "url": "https://api.github.com/users/NielsRogge" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-11T08:55:45Z
2022-08-11T09:59:29Z
2022-08-11T09:45:40Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4821.diff", "html_url": "https://github.com/huggingface/datasets/pull/4821", "merged_at": "2022-08-11T09:45:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/4821.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4821" }
I saw that `stratify` was added to the `train_test_split` method in #4322, hence the docs can be updated.
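For reference, a hedged example of the stratified split the updated docs describe, assuming the option is exposed as the `stratify_by_column` parameter and that the stratification column is a `ClassLabel`:

```python
# Hedged example of a stratified train/test split; assumes `stratify_by_column`
# is the parameter name added by #4322 and that "label" is a ClassLabel column.
from datasets import ClassLabel, Dataset, Features, Value

ds = Dataset.from_dict(
    {"text": list("abcdefgh"), "label": [0, 1] * 4},
    features=Features({"text": Value("string"), "label": ClassLabel(names=["neg", "pos"])}),
)
splits = ds.train_test_split(test_size=0.25, stratify_by_column="label")
print(splits["train"]["label"], splits["test"]["label"])  # class balance preserved in both splits
```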
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4821/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4821/timeline
null
null
true