| Field | Type | Range / classes |
|---|---|---|
| url | stringlengths | 58–61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72–75 |
| comments_url | stringlengths | 67–70 |
| events_url | stringlengths | 65–68 |
| html_url | stringlengths | 46–51 |
| id | int64 | 599M–1.11B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–3.59k |
| title | stringlengths | 1–276 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | int64 | 1,587B–1,643B |
| updated_at | int64 | 1,587B–1,643B |
| closed_at | int64 | 1,587B–1,643B |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | stringlengths | 0–228k |
| reactions | dict | |
| timeline_url | stringlengths | 67–70 |
| performed_via_github_app | null | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/2066
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2066/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2066/comments
https://api.github.com/repos/huggingface/datasets/issues/2066/events
https://github.com/huggingface/datasets/pull/2066
833,480,551
MDExOlB1bGxSZXF1ZXN0NTk0NDcwMjEz
2,066
Fix docstring rendering of Dataset/DatasetDict.from_csv args
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,965,790,000
1,615,972,881,000
1,615,972,881,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2066", "html_url": "https://github.com/huggingface/datasets/pull/2066", "diff_url": "https://github.com/huggingface/datasets/pull/2066.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2066.patch", "merged_at": 1615972881000 }
Fix the docstring rendering of Dataset/DatasetDict.from_csv args.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2066/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2066/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2065
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2065/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2065/comments
https://api.github.com/repos/huggingface/datasets/issues/2065/events
https://github.com/huggingface/datasets/issues/2065
833,291,432
MDU6SXNzdWU4MzMyOTE0MzI=
2,065
Only user permission of saved cache files, not group
{ "login": "lorr1", "id": 57237365, "node_id": "MDQ6VXNlcjU3MjM3MzY1", "avatar_url": "https://avatars.githubusercontent.com/u/57237365?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lorr1", "html_url": "https://github.com/lorr1", "followers_url": "https://api.github.com/users/lorr1/followers", "following_url": "https://api.github.com/users/lorr1/following{/other_user}", "gists_url": "https://api.github.com/users/lorr1/gists{/gist_id}", "starred_url": "https://api.github.com/users/lorr1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lorr1/subscriptions", "organizations_url": "https://api.github.com/users/lorr1/orgs", "repos_url": "https://api.github.com/users/lorr1/repos", "events_url": "https://api.github.com/users/lorr1/events{/privacy}", "received_events_url": "https://api.github.com/users/lorr1/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting.\r\n\r\nCurrently there's no way to specify this.\r\n\r\nWhen loading/processing a dataset, the arrow file is written using a temporary file. Then once writing is finished, it's moved to the cache directory (using `shutil.move` [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1646))\r\n\r\nThat means it keeps the permissions specified by the `tempfile.NamedTemporaryFile` object, i.e. `-rw-------` instead of `-rw-r--r--`. Improving this could be a nice first contribution to the library :)", "Hi @lhoestq,\r\nI looked into this and yes you're right. The `NamedTemporaryFile` is always created with mode 0600, which prevents group from reading the file. Should we change the permissions of `tmp_file.name` [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1871) and [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1590), post creation to 0644 inorder for group and others to read it?", "Good idea :) we could even update the permissions after the file has been moved by shutil.move [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1899) and [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1646) actually.\r\nApparently they set the default 0600 for temporary files for security reasons, so let's update the umask only after the file has been moved", "Would it be possible to actually set the umask based on a user provided argument? For example, a popular usecase my team has is using a shared file-system for processing datasets. This may involve writing/deleting other files, or changing filenames, which a -rw-r--r-- wouldn't fix. ", "Note that you can get the cache files of a dataset with the `cache_files` attributes.\r\nThen you can `chmod` those files and all the other cache files in the same directory.\r\n\r\nMoreover we can probably keep the same permissions after each transform. This way you just need to set the permissions once after doing `load_dataset` for example, and then all the new transformed cached files will have the same permissions.\r\nWhat do you think ?", "This means we'll check the permissions of other `cache_files` already created for a dataset before setting permissions for new `cache_files`?", "You can just check the permission of `dataset.cache_files[0]` imo", "> This way you just need to set the permissions once after doing load_dataset for example, and then all the new transformed cached files will have the same permissions.\r\n\r\nI was referring to this. Ensuring that newly generated `cache_files` have the same permissions", "Yes exactly\r\n\r\nI imagine users can first do `load_dataset`, then chmod on the arrow files. After that all the new cache files could have the same permissions as the first arrow files. Opinions on this ?", "Sounds nice but I feel this is a sub-part of the approach mentioned by @siddk. Instead of letting the user set new permissions by itself first and then making sure newly generated files have same permissions why don't we ask the user initially only what they want? What are your thoughts?", "Yes sounds good. Should this be a parameter in `load_dataset` ? Or an env variable ? 
Or use the value of `os.umask` ?", "Ideally it should be a parameter in `load_dataset` but I'm not sure how important it is for the users (considering only important things should go into `load_dataset` parameters)", "I think it's fairly important; for context, our team uses a shared file-system where many folks run experiments based on datasets that are cached by other users.\r\n\r\nFor example, I might start a training run, downloading a dataset. Then, a couple of days later, a collaborator using the same repository might want to use the same dataset on the same shared filesystem, but won't be able to under the default permissions.\r\n\r\nBeing able to specify directly in the top-level `load_dataset()` call seems important, but an equally valid option would be to just inherit from the running user's `umask` (this should probably be the default anyway).\r\n\r\nSo basically, argument that takes a custom set of permissions, and by default, use the running user's umask!", "Maybe let's start by defaulting to the user's umask !\r\nDo you want to give it a try @bhavitvyamalik ?", "Yeah sure! Instead of using default `0o644` should I first extract umask of current user and then use `os.umask` on it? We can do it inside `Dataset` class so that all folders/files created during the call use running user's umask\r\n\r\n", "You can get the umask using `os.umask` and then I guess you can just use `os.chmod` as in your previous PR, but with the right permissions depending on the umask.", "FWIW, we have this issue with other caches - e.g. `transformers` model files. So probably will need to backport this into `transformers` as well.\r\n\r\nthanks @thomwolf for the pointer.", "Hi @stas00,\r\nFor this should we use the same umask code in the respective model directory inside `TRANSFORMERS_CACHE`?", "That sounds very right to me, @bhavitvyamalik " ]
1,615,940,422,000
1,620,629,129,000
1,620,629,129,000
NONE
null
null
null
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you know any ways around this or a way to correctly set the permissions?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2065/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2065/timeline
null
false
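A minimal sketch of the approach the thread above converges on: `chmod` after `shutil.move`, deriving the mode from the running user's umask rather than a hard-coded value. The function name and the `0o666` base mode are illustrative assumptions, not the library's actual code:

```python
import os
import shutil

def move_with_umask(tmp_path: str, dest_path: str) -> None:
    """Move a NamedTemporaryFile into the cache, then widen its
    restrictive 0o600 mode according to the running user's umask."""
    shutil.move(tmp_path, dest_path)
    umask = os.umask(0)   # os.umask both sets and returns, so read it...
    os.umask(umask)       # ...then immediately restore it
    os.chmod(dest_path, 0o666 & ~umask)
```

With a typical umask of `0o022` this yields `-rw-r--r--` files, so teammates in the same group can read shared cache files without manual resets.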
https://api.github.com/repos/huggingface/datasets/issues/2064
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2064/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2064/comments
https://api.github.com/repos/huggingface/datasets/issues/2064/events
https://github.com/huggingface/datasets/pull/2064
833,002,360
MDExOlB1bGxSZXF1ZXN0NTk0MDczOTQ1
2,064
Fix ted_talks_iwslt version error
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,913,025,000
1,615,917,608,000
1,615,917,608,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2064", "html_url": "https://github.com/huggingface/datasets/pull/2064", "diff_url": "https://github.com/huggingface/datasets/pull/2064.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2064.patch", "merged_at": 1615917607000 }
This PR fixes the bug where the version argument would be passed twice if the dataset configuration was created on the fly. Fixes #2059
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2064/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2064/timeline
null
true
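The underlying failure in #2059 is the generic "got multiple values for keyword argument" pattern: `version` arrives both as an explicit keyword and inside `**config_kwargs`. A hedged sketch of the bug and the usual fix, with illustrative class and function names rather than the actual patch:

```python
class FlyConfig:
    """Stand-in for a dataset config created on the fly (illustrative)."""
    def __init__(self, version=None, **kwargs):
        self.version = version

def create_config(**config_kwargs):
    # Buggy form: FlyConfig(version="1.1.0", **config_kwargs) raises
    # TypeError when "version" is also present in config_kwargs.
    # Fix: remove it from the kwargs first so it is passed only once.
    version = config_kwargs.pop("version", "1.1.0")
    return FlyConfig(version=version, **config_kwargs)

print(create_config(version="1.2.0", year="2014").version)  # 1.2.0
```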
https://api.github.com/repos/huggingface/datasets/issues/2063
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2063/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2063/comments
https://api.github.com/repos/huggingface/datasets/issues/2063/events
https://github.com/huggingface/datasets/pull/2063
832,993,705
MDExOlB1bGxSZXF1ZXN0NTk0MDY2NzI5
2,063
[Common Voice] Adapt dataset script so that no manual data download is actually needed
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,912,424,000
1,615,974,172,000
1,615,974,157,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2063", "html_url": "https://github.com/huggingface/datasets/pull/2063", "diff_url": "https://github.com/huggingface/datasets/pull/2063.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2063.patch", "merged_at": 1615974157000 }
This PR changes the dataset script so that no manual data dir is needed anymore.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2063/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2063/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2062
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2062/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2062/comments
https://api.github.com/repos/huggingface/datasets/issues/2062/events
https://github.com/huggingface/datasets/pull/2062
832,625,483
MDExOlB1bGxSZXF1ZXN0NTkzNzUyNTMz
2,062
docs: fix missing quotation
{ "login": "neal2018", "id": 46561493, "node_id": "MDQ6VXNlcjQ2NTYxNDkz", "avatar_url": "https://avatars.githubusercontent.com/u/46561493?v=4", "gravatar_id": "", "url": "https://api.github.com/users/neal2018", "html_url": "https://github.com/neal2018", "followers_url": "https://api.github.com/users/neal2018/followers", "following_url": "https://api.github.com/users/neal2018/following{/other_user}", "gists_url": "https://api.github.com/users/neal2018/gists{/gist_id}", "starred_url": "https://api.github.com/users/neal2018/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neal2018/subscriptions", "organizations_url": "https://api.github.com/users/neal2018/orgs", "repos_url": "https://api.github.com/users/neal2018/repos", "events_url": "https://api.github.com/users/neal2018/events{/privacy}", "received_events_url": "https://api.github.com/users/neal2018/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,889,274,000
1,615,972,917,000
1,615,972,917,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2062", "html_url": "https://github.com/huggingface/datasets/pull/2062", "diff_url": "https://github.com/huggingface/datasets/pull/2062.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2062.patch", "merged_at": 1615972916000 }
The json code misses a quote
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2062/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2062/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2061
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2061/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2061/comments
https://api.github.com/repos/huggingface/datasets/issues/2061/events
https://github.com/huggingface/datasets/issues/2061
832,596,228
MDU6SXNzdWU4MzI1OTYyMjg=
2,061
Cannot load udpos subsets from xtreme dataset using load_dataset()
{ "login": "adzcodez", "id": 55791365, "node_id": "MDQ6VXNlcjU1NzkxMzY1", "avatar_url": "https://avatars.githubusercontent.com/u/55791365?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adzcodez", "html_url": "https://github.com/adzcodez", "followers_url": "https://api.github.com/users/adzcodez/followers", "following_url": "https://api.github.com/users/adzcodez/following{/other_user}", "gists_url": "https://api.github.com/users/adzcodez/gists{/gist_id}", "starred_url": "https://api.github.com/users/adzcodez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adzcodez/subscriptions", "organizations_url": "https://api.github.com/users/adzcodez/orgs", "repos_url": "https://api.github.com/users/adzcodez/repos", "events_url": "https://api.github.com/users/adzcodez/events{/privacy}", "received_events_url": "https://api.github.com/users/adzcodez/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
null
[]
null
[ "@lhoestq Adding \"_\" to the class labels in the dataset script will fix the issue.\r\n\r\nThe bigger issue IMO is that the data files are in conll format, but the examples are tokens, not sentences.", "Hi ! Thanks for reporting @adzcodez \r\n\r\n\r\n> @lhoestq Adding \"_\" to the class labels in the dataset script will fix the issue.\r\n> \r\n> The bigger issue IMO is that the data files are in conll format, but the examples are tokens, not sentences.\r\n\r\nYou're right: \"_\" should be added to the list of labels, and the examples must be sequences of tokens, not singles tokens.\r\n", "@lhoestq Can you please label this issue with the \"good first issue\" label? I'm not sure I'll find time to fix this.\r\n\r\nTo resolve it, the user should:\r\n1. add `\"_\"` to the list of labels\r\n2. transform the udpos subset to the conll format (I think the preprocessing logic can be borrowed from [the original repo](https://github.com/google-research/xtreme/blob/58a76a0d02458c4b3b6a742d3fd4ffaca80ff0de/utils_preprocess.py#L187-L204))\r\n3. update the dummy data\r\n4. update the dataset info\r\n5. [optional] add info about the data fields structure of the udpos subset to the dataset readme", "I tried fixing this issue, but its working fine in the dev version : \"1.6.2.dev0\"\r\n\r\nI think somebody already fixed it. ", "Hi,\r\n\r\nafter #2326, the lines with pos tags equal to `\"_\"` are filtered out when generating the dataset, so this fixes the KeyError described above. However, the udpos subset should be in the conll format i.e. it should yield sequences of tokens and not single tokens, so it would be great to see this fixed (feel free to borrow the logic from [here](https://github.com/google-research/xtreme/blob/58a76a0d02458c4b3b6a742d3fd4ffaca80ff0de/utils_preprocess.py#L187-L204) if you decide to work on this). ", "Closed by #2466." ]
1,615,887,133,000
1,624,017,251,000
1,624,017,250,000
NONE
null
null
null
Hello, I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset altogether (such as XNLI) has no issue. I have also tried on Colab and faced the same error. Reprex is: `from datasets import load_dataset ` `dataset = load_dataset('xtreme', 'udpos.English')` The error is: `KeyError: '_'` The full traceback is: KeyError Traceback (most recent call last) <ipython-input-5-7181359ea09d> in <module> 1 from datasets import load_dataset ----> 2 dataset = load_dataset('xtreme', 'udpos.English') ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs) 738 739 # Download and prepare data --> 740 builder_instance.download_and_prepare( 741 download_config=download_config, 742 download_mode=download_mode, ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 576 logger.warning("HF google storage unreachable. Downloading and preparing it from source") 577 if not downloaded_from_gcs: --> 578 self._download_and_prepare( 579 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 580 ) ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 654 try: 655 # Prepare split will record examples associated to the split --> 656 self._prepare_split(split_generator, **prepare_split_kwargs) 657 except OSError as e: 658 raise OSError( ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in _prepare_split(self, split_generator) 977 generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose 978 ): --> 979 example = self.info.features.encode_example(record) 980 writer.write(example) 981 finally: ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_example(self, example) 946 def encode_example(self, example): 947 example = cast_to_python_objects(example) --> 948 return encode_nested_example(self, example) 949 950 def encode_batch(self, batch): ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_nested_example(schema, obj) 840 # Nested structures: we allow dict, list/tuples, sequences 841 if isinstance(schema, dict): --> 842 return { 843 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) 844 } ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in <dictcomp>(.0) 841 if isinstance(schema, dict): 842 return { --> 843 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) 844 } 845 elif isinstance(schema, (list, tuple)): ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_nested_example(schema, obj) 868 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks 869 elif isinstance(schema, (ClassLabel, TranslationVariableLanguages, Value, _ArrayXD)): --> 870 return schema.encode_example(obj) 871 # Other object should be directly convertible to a native Arrow type (like Translation and Translation) 872 return obj 
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_example(self, example_data) 647 # If a string is given, convert to associated integer 648 if isinstance(example_data, str): --> 649 example_data = self.str2int(example_data) 650 651 # Allowing -1 to mean no label. ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in str2int(self, values) 605 if value not in self._str2int: 606 value = value.strip() --> 607 output.append(self._str2int[str(value)]) 608 else: 609 # No names provided, try to integerize KeyError: '_'
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2061/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2061/timeline
null
false
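A minimal reproduction of the failure mode discussed above, with the label list shortened for illustration; in datasets 1.4 the failed lookup surfaced as a `KeyError`, while newer releases raise `ValueError`:

```python
from datasets import ClassLabel

tags = ClassLabel(names=["ADJ", "NOUN", "VERB"])   # "_" missing, as in the script
try:
    tags.str2int("_")
except (KeyError, ValueError) as err:              # KeyError in datasets 1.4
    print("lookup failed:", err)

fixed = ClassLabel(names=["ADJ", "NOUN", "VERB", "_"])
print(fixed.str2int("_"))  # 3
```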
https://api.github.com/repos/huggingface/datasets/issues/2060
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2060/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2060/comments
https://api.github.com/repos/huggingface/datasets/issues/2060/events
https://github.com/huggingface/datasets/pull/2060
832,588,591
MDExOlB1bGxSZXF1ZXN0NTkzNzIxNzcx
2,060
Filtering refactor
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false } ]
null
[ "I thought at first that the multiproc test was not relevant now that we do stuff only in memory, but I think there's something that's actually broken, my tiny benchmark on bookcorpus runs forever (2hrs+) when I add `num_proc=4` as a kwarg, will investigate 👀 \r\n\r\nI'm not familiar with the caching you describe for `.map`, I'll look it up.", "turns out the multi proc issue is also on master, I won't fix it in this PR but opened #2071 to track the problem.", "tracemalloc outputs from this script:\r\n\r\n```python\r\nimport logging\r\nimport sys\r\nimport time\r\nimport tracemalloc\r\n\r\nfrom datasets import load_dataset, set_caching_enabled\r\n\r\n\r\nif __name__ == \"__main__\":\r\n set_caching_enabled(False)\r\n logging.basicConfig(level=logging.DEBUG)\r\n\r\n tracemalloc.start()\r\n bc = load_dataset(\"bookcorpus\")\r\n\r\n now = time.time()\r\n try:\r\n snapshot1 = tracemalloc.take_snapshot()\r\n bc[\"train\"].filter(lambda x: len(x[\"text\"]) < 64, num_proc=int(sys.argv[1]))\r\n except Exception as e:\r\n print(f\"cancelled: {e}\")\r\n exit(1)\r\n snapshot2 = tracemalloc.take_snapshot()\r\n tracemalloc.stop()\r\n elapsed = time.time() - now\r\n\r\n print(elapsed)\r\n top_stats = snapshot2.compare_to(snapshot1, \"lineno\")\r\n\r\n print(\"[ Top 10 differences ]\")\r\n for stat in top_stats[:10]:\r\n print(stat)\r\n\r\n```\r\n\r\n\r\nThis branch:\r\n\r\n```\r\n ssh://theo@35.205.12.130:22/home/theo/.local/share/miniconda3/envs/datasets/bin/python -u benchmark_filter.py 1\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): s3.amazonaws.com:443\r\n DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 \"HEAD /datasets.huggingface.co/datasets/datasets/bookcorpus/bookcorpus.py HTTP/1.1\" 200 0\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443\r\n DEBUG:urllib3.connectionpool:https://raw.githubusercontent.com:443 \"HEAD /huggingface/datasets/master/datasets/bookcorpus/bookcorpus.py HTTP/1.1\" 200 0\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443\r\n DEBUG:urllib3.connectionpool:https://raw.githubusercontent.com:443 \"HEAD /huggingface/datasets/master/datasets/bookcorpus/dataset_infos.json HTTP/1.1\" 200 0\r\n WARNING:datasets.builder:Reusing dataset bookcorpus (/home/theo/.cache/huggingface/datasets/bookcorpus/plain_text/1.0.0/af844be26c089fb64810e9f2cd841954fd8bd596d6ddd26326e4c70e2b8c96fc)\r\n 0%| | 0/74005 [00:00<?, ?ba/s]2021-03-23 10:23:20.051255: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\r\n 2021-03-23 10:23:20.051304: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\n DEBUG:tensorflow:Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.\r\n 100%|████████████████████████████████████| 74005/74005 [12:16<00:00, 100.54ba/s]\r\n 815.6356580257416\r\n [ Top 10 differences ]\r\n <frozen importlib._bootstrap_external>:580: size=38.0 MiB (+33.7 MiB), count=326226 (+307928), average=122 B\r\n <frozen importlib._bootstrap>:219: size=7643 KiB (+7553 KiB), count=26372 (+25473), average=297 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/torch/__init__.py:427: size=1291 KiB (+1291 KiB), count=5924 (+5924), average=223 B\r\n 
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/abc.py:85: size=1039 KiB (+1026 KiB), count=3428 (+3384), average=310 B\r\n <frozen importlib._bootstrap_external>:64: size=917 KiB (+891 KiB), count=5300 (+5132), average=177 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/collections/__init__.py:456: size=720 KiB (+709 KiB), count=3403 (+3349), average=217 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/tf_export.py:346: size=607 KiB (+607 KiB), count=3962 (+3962), average=157 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/linecache.py:137: size=998 KiB (+487 KiB), count=9551 (+4517), average=107 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/tf_decorator.py:241: size=367 KiB (+367 KiB), count=5225 (+5225), average=72 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/decorator_utils.py:114: size=359 KiB (+359 KiB), count=330 (+330), average=1114 B\r\n```\r\n\r\nOn master:\r\n```\r\n ssh://theo@35.205.12.130:22/home/theo/.local/share/miniconda3/envs/datasets/bin/python -u benchmark_filter.py 1\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): s3.amazonaws.com:443\r\n DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 \"HEAD /datasets.huggingface.co/datasets/datasets/bookcorpus/bookcorpus.py HTTP/1.1\" 200 0\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443\r\n DEBUG:urllib3.connectionpool:https://raw.githubusercontent.com:443 \"HEAD /huggingface/datasets/master/datasets/bookcorpus/bookcorpus.py HTTP/1.1\" 200 0\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443\r\n DEBUG:urllib3.connectionpool:https://raw.githubusercontent.com:443 \"HEAD /huggingface/datasets/master/datasets/bookcorpus/dataset_infos.json HTTP/1.1\" 200 0\r\n WARNING:datasets.builder:Reusing dataset bookcorpus (/home/theo/.cache/huggingface/datasets/bookcorpus/plain_text/1.0.0/af844be26c089fb64810e9f2cd841954fd8bd596d6ddd26326e4c70e2b8c96fc)\r\n 0%| | 0/74005 [00:00<?, ?ba/s]2021-03-23 12:26:47.219622: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\r\n 2021-03-23 12:26:47.219669: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\n DEBUG:tensorflow:Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.\r\n 100%|███████████████████████████████████| 74005/74005 [1:02:17<00:00, 19.80ba/s]\r\n 3738.870892047882\r\n [ Top 10 differences ]\r\n <frozen importlib._bootstrap_external>:580: size=38.0 MiB (+33.7 MiB), count=326221 (+307919), average=122 B\r\n <frozen importlib._bootstrap>:219: size=7648 KiB (+7557 KiB), count=26455 (+25555), average=296 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/torch/__init__.py:427: size=1291 KiB (+1291 KiB), count=5924 (+5924), average=223 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/abc.py:85: size=1039 KiB (+1026 KiB), count=3429 (+3385), average=310 B\r\n <frozen importlib._bootstrap_external>:64: size=917 KiB (+891 KiB), count=5300 (+5132), average=177 B\r\n 
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/collections/__init__.py:456: size=720 KiB (+709 KiB), count=3403 (+3349), average=217 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/tf_export.py:346: size=607 KiB (+607 KiB), count=3962 (+3962), average=157 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/linecache.py:137: size=1000 KiB (+489 KiB), count=9569 (+4535), average=107 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/tf_decorator.py:241: size=367 KiB (+367 KiB), count=5225 (+5225), average=72 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/decorator_utils.py:114: size=359 KiB (+359 KiB), count=330 (+330), average=1114 B\r\n```\r\n\r\nI'm not concluding much, it seems nothing is really happening to memory on `pyarrow::Table.filter`? ", "Cool ! Maybe it increases the memory a bit but what's brought in memory is not the resulting Table but something else (not sure what though).\r\nWhat's the length of the resulting dataset ?\r\nYou can also take a look at `pyarrow.total_allocated_memory()` to show how much memory is being used by pyarrow", "```diff\r\ndiff --git a/benchmarks/benchmark_filter.py b/benchmarks/benchmark_filter.py\r\nindex 4b9efd4e..a862c204 100644\r\n--- a/benchmarks/benchmark_filter.py\r\n+++ b/benchmarks/benchmark_filter.py\r\n@@ -1,6 +1,9 @@\r\n import logging\r\n import sys\r\n import time\r\n+import tracemalloc\r\n+\r\n+import pyarrow as pa\r\n \r\n from datasets import load_dataset, set_caching_enabled\r\n \r\n@@ -9,13 +12,28 @@ if __name__ == \"__main__\":\r\n set_caching_enabled(False)\r\n logging.basicConfig(level=logging.DEBUG)\r\n \r\n+ tracemalloc.start()\r\n bc = load_dataset(\"bookcorpus\")\r\n \r\n now = time.time()\r\n try:\r\n+ snapshot1 = tracemalloc.take_snapshot()\r\n+ pamem1 = pa.total_allocated_bytes()\r\n bc[\"train\"].filter(lambda x: len(x[\"text\"]) < 64, num_proc=int(sys.argv[1]))\r\n+ pamem2 = pa.total_allocated_bytes()\r\n+ snapshot2 = tracemalloc.take_snapshot()\r\n except Exception as e:\r\n print(f\"cancelled: {e}\")\r\n+ exit(1)\r\n+ tracemalloc.stop()\r\n elapsed = time.time() - now\r\n \r\n print(elapsed)\r\n+ top_stats = snapshot2.compare_to(snapshot1, \"lineno\")\r\n+\r\n+ print(\"[ Top 10 differences ]\")\r\n+ for stat in top_stats[:10]:\r\n+ print(stat)\r\n+\r\n+ print(\"[ pyarrow reporting ]\")\r\n+ print(f\"before: ({pamem1}) after: ({pamem2})\")\r\n```\r\n\r\nthis yields 0-0, does not seem like a good tool 😛 and the documentation is [quite mysterious.](https://arrow.apache.org/docs/python/generated/pyarrow.total_allocated_bytes.html)", "Personally if I use your script to benchmark on this branch\r\n```python\r\nbc = load_dataset(\"bookcorpus\", split=\"train[:1%]\")\r\nbc = bc.filter(lambda x: len(x[\"text\"]) < 64)\r\n```\r\n\r\nthen I get\r\n```\r\n[ pyarrow reporting ]\r\nbefore: (0) after: (15300672)\r\n```\r\n\r\nMaybe you got 0-0 because the filter output is directly garbage collected, since you didn't do\r\n```python\r\nbc[\"train\"] = bc[\"train\"].filter(...)\r\n```\r\nCan you try again on your side just to make sure ?\r\n\r\nEven if the documentation doesn't say much, `pa.total_allocated_bytes` if pretty useful, and also very consistent.\r\nIt tracks the number of bytes used for arrow data.", "> Maybe you got 0-0 because the filter output is directly garbage collected, since you didn't do\r\n> \r\n> ```python\r\n> 
bc[\"train\"] = bc[\"train\"].filter(...)\r\n> ```\r\nNice catch! I get 1.74GB for this branch", "Looks like we may need to write the filtered table on the disk then.\r\n\r\nThe other option is to slice the table to keep only the good rows and concatenate them but this is too slow at the moment since slicing is O(n) until #1803 is fixed. I'll work on this issue this afternoon", "From investigation it looks like the lib's `Table.filter` cannot send its output to memorymap, asked a question on the mailing list, see [here](https://lists.apache.org/thread.html/r8cd8591ce83a967eb0097a7f31785ac2f3ee95ea371c8c5beb0720ad%40%3Cuser.arrow.apache.org%3E)", "closing in favor of #2836 " ]
1,615,886,610,000
1,634,116,144,000
1,634,116,143,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2060", "html_url": "https://github.com/huggingface/datasets/pull/2060", "diff_url": "https://github.com/huggingface/datasets/pull/2060.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2060.patch", "merged_at": null }
fix https://github.com/huggingface/datasets/issues/2032 benchmarking is somewhat inconclusive, currently running on `book_corpus` with: ```python bc = load_dataset("bookcorpus") now = time.time() bc.filter(lambda x: len(x["text"]) < 64) elapsed = time.time() - now print(elapsed) ``` this branch does it in 233 seconds, master in 1409 seconds.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2060/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2060/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2059
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2059/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2059/comments
https://api.github.com/repos/huggingface/datasets/issues/2059/events
https://github.com/huggingface/datasets/issues/2059
832,579,156
MDU6SXNzdWU4MzI1NzkxNTY=
2,059
Error while following docs to load the `ted_talks_iwslt` dataset
{ "login": "ekdnam", "id": 40426312, "node_id": "MDQ6VXNlcjQwNDI2MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/40426312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ekdnam", "html_url": "https://github.com/ekdnam", "followers_url": "https://api.github.com/users/ekdnam/followers", "following_url": "https://api.github.com/users/ekdnam/following{/other_user}", "gists_url": "https://api.github.com/users/ekdnam/gists{/gist_id}", "starred_url": "https://api.github.com/users/ekdnam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ekdnam/subscriptions", "organizations_url": "https://api.github.com/users/ekdnam/orgs", "repos_url": "https://api.github.com/users/ekdnam/repos", "events_url": "https://api.github.com/users/ekdnam/events{/privacy}", "received_events_url": "https://api.github.com/users/ekdnam/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "@skyprince999 as you authored the PR for this dataset, any comments?", "This has been fixed in #2064 by @mariosasko (thanks again !)\r\n\r\nThe fix is available on the master branch and we'll do a new release very soon :)" ]
1,615,885,939,000
1,615,917,631,000
1,615,917,607,000
NONE
null
null
null
I am currently trying to load the `ted_talks_iwslt` dataset into google colab. The [docs](https://huggingface.co/datasets/ted_talks_iwslt) mention the following way of doing so. ```python dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014") ``` Executing it results in the error attached below. ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-6-7dcc67154ef9> in <module>() ----> 1 dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014") 4 frames /usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs) 730 hash=hash, 731 features=features, --> 732 **config_kwargs, 733 ) 734 /usr/local/lib/python3.7/dist-packages/datasets/builder.py in __init__(self, writer_batch_size, *args, **kwargs) 927 928 def __init__(self, *args, writer_batch_size=None, **kwargs): --> 929 super(GeneratorBasedBuilder, self).__init__(*args, **kwargs) 930 # Batch size used by the ArrowWriter 931 # It defines the number of samples that are kept in memory before writing them /usr/local/lib/python3.7/dist-packages/datasets/builder.py in __init__(self, cache_dir, name, hash, features, **config_kwargs) 241 name, 242 custom_features=features, --> 243 **config_kwargs, 244 ) 245 /usr/local/lib/python3.7/dist-packages/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs) 337 if "version" not in config_kwargs and hasattr(self, "VERSION") and self.VERSION: 338 config_kwargs["version"] = self.VERSION --> 339 builder_config = self.BUILDER_CONFIG_CLASS(**config_kwargs) 340 341 # otherwise use the config_kwargs to overwrite the attributes /root/.cache/huggingface/modules/datasets_modules/datasets/ted_talks_iwslt/024d06b1376b361e59245c5878ab8acf9a7576d765f2d0077f61751158e60914/ted_talks_iwslt.py in __init__(self, language_pair, year, **kwargs) 219 description=description, 220 version=datasets.Version("1.1.0", ""), --> 221 **kwargs, 222 ) 223 TypeError: __init__() got multiple values for keyword argument 'version' ``` How to resolve this? PS: Thanks a lot @huggingface team for creating this great library!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2059/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2059/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2058
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2058/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2058/comments
https://api.github.com/repos/huggingface/datasets/issues/2058/events
https://github.com/huggingface/datasets/issues/2058
832,159,844
MDU6SXNzdWU4MzIxNTk4NDQ=
2,058
Is it possible to convert a `tfds` to HuggingFace `dataset`?
{ "login": "abarbosa94", "id": 6608232, "node_id": "MDQ6VXNlcjY2MDgyMzI=", "avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abarbosa94", "html_url": "https://github.com/abarbosa94", "followers_url": "https://api.github.com/users/abarbosa94/followers", "following_url": "https://api.github.com/users/abarbosa94/following{/other_user}", "gists_url": "https://api.github.com/users/abarbosa94/gists{/gist_id}", "starred_url": "https://api.github.com/users/abarbosa94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abarbosa94/subscriptions", "organizations_url": "https://api.github.com/users/abarbosa94/orgs", "repos_url": "https://api.github.com/users/abarbosa94/repos", "events_url": "https://api.github.com/users/abarbosa94/events{/privacy}", "received_events_url": "https://api.github.com/users/abarbosa94/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,615,839,527,000
1,615,839,527,000
null
CONTRIBUTOR
null
null
null
I was having some weird bugs with `C4`dataset version of HuggingFace, so I decided to try to download `C4`from `tfds`. I would like to know if it is possible to convert a tfds dataset to HuggingFace dataset format :) I can also open a new issue reporting the bug I'm receiving with `datasets.load_dataset('c4','en')` in the future if you think that it would be useful. Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2058/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2058/timeline
null
false
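The question is left open in this thread; one possible approach, assuming a recent `datasets` release that provides `Dataset.from_generator`, is to stream the tfds split as numpy and rebuild it as a Hugging Face dataset:

```python
import tensorflow_datasets as tfds
from datasets import Dataset

tf_ds = tfds.load("c4/en", split="train")

def examples():
    # Iterate the tfds split as numpy and decode byte strings to text
    for ex in tfds.as_numpy(tf_ds):
        yield {k: v.decode("utf-8") if isinstance(v, bytes) else v
               for k, v in ex.items()}

hf_ds = Dataset.from_generator(examples)
```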
https://api.github.com/repos/huggingface/datasets/issues/2057
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2057/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2057/comments
https://api.github.com/repos/huggingface/datasets/issues/2057/events
https://github.com/huggingface/datasets/pull/2057
832,120,522
MDExOlB1bGxSZXF1ZXN0NTkzMzMzMjM0
2,057
update link to ZEST dataset
{ "login": "matt-peters", "id": 619844, "node_id": "MDQ6VXNlcjYxOTg0NA==", "avatar_url": "https://avatars.githubusercontent.com/u/619844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/matt-peters", "html_url": "https://github.com/matt-peters", "followers_url": "https://api.github.com/users/matt-peters/followers", "following_url": "https://api.github.com/users/matt-peters/following{/other_user}", "gists_url": "https://api.github.com/users/matt-peters/gists{/gist_id}", "starred_url": "https://api.github.com/users/matt-peters/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/matt-peters/subscriptions", "organizations_url": "https://api.github.com/users/matt-peters/orgs", "repos_url": "https://api.github.com/users/matt-peters/repos", "events_url": "https://api.github.com/users/matt-peters/events{/privacy}", "received_events_url": "https://api.github.com/users/matt-peters/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,836,177,000
1,615,914,388,000
1,615,914,388,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2057", "html_url": "https://github.com/huggingface/datasets/pull/2057", "diff_url": "https://github.com/huggingface/datasets/pull/2057.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2057.patch", "merged_at": 1615914388000 }
Updating the link as the original one is no longer working.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2057/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2057/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2056
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2056/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2056/comments
https://api.github.com/repos/huggingface/datasets/issues/2056/events
https://github.com/huggingface/datasets/issues/2056
831,718,397
MDU6SXNzdWU4MzE3MTgzOTc=
2,056
issue with opus100/en-fr dataset
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq I also deleted the cache and redownload the file and still the same issue, I appreciate any help on this. thanks ", "Here please find the minimal code to reproduce the issue @lhoestq note this only happens with MT5TokenizerFast\r\n\r\n```\r\nfrom datasets import load_dataset\r\nfrom transformers import MT5TokenizerFast\r\n\r\ndef get_tokenized_dataset(dataset_name, dataset_config_name, tokenizer):\r\n datasets = load_dataset(dataset_name, dataset_config_name, script_version=\"master\")\r\n column_names = datasets[\"train\"].column_names\r\n text_column_name = \"translation\"\r\n def process_dataset(datasets):\r\n def process_function(examples):\r\n lang = \"fr\"\r\n return {\"src_texts\": [example[lang] for example in examples[text_column_name]]}\r\n datasets = datasets.map(\r\n process_function,\r\n batched=True,\r\n num_proc=None,\r\n remove_columns=column_names,\r\n load_from_cache_file=True,\r\n )\r\n return datasets\r\n datasets = process_dataset(datasets)\r\n text_column_name = \"src_texts\"\r\n column_names = [\"src_texts\"]\r\n def tokenize_function(examples):\r\n return tokenizer(examples[text_column_name], return_special_tokens_mask=True)\r\n tokenized_datasets = datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=None,\r\n remove_columns=column_names,\r\n load_from_cache_file=True\r\n )\r\n\r\nif __name__ == \"__main__\":\r\n tokenizer_kwargs = {\r\n \"cache_dir\": None,\r\n \"use_fast\": True,\r\n \"revision\": \"main\",\r\n \"use_auth_token\": None\r\n }\r\n tokenizer = MT5TokenizerFast.from_pretrained(\"google/mt5-small\", **tokenizer_kwargs)\r\n get_tokenized_dataset(dataset_name=\"opus100\", dataset_config_name=\"en-fr\", tokenizer=tokenizer)\r\n~ \r\n```", "as per https://github.com/huggingface/tokenizers/issues/626 this looks like to be the tokenizer bug, I therefore, reported it there https://github.com/huggingface/tokenizers/issues/626 and I am closing this one." ]
1,615,807,962,000
1,615,909,740,000
1,615,909,739,000
NONE
null
null
null
Hi I am running run_mlm.py code of huggingface repo with opus100/fr-en pair, I am getting this error, note that this error occurs for only this pairs and not the other pairs. Any idea why this is occurring? and how I can solve this? Thanks a lot @lhoestq for your help in advance. ` thread '<unnamed>' panicked at 'index out of bounds: the len is 617 but the index is 617', /__w/tokenizers/tokenizers/tokenizers/src/tokenizer/normalizer.rs:382:21 note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace 63%|██████████████████████████████████████████████████████████▊ | 626/1000 [00:27<00:16, 22.69ba/s] Traceback (most recent call last): File "run_mlm.py", line 550, in <module> main() File "run_mlm.py", line 412, in main in zip(data_args.dataset_name, data_args.dataset_config_name)] File "run_mlm.py", line 411, in <listcomp> logger) for dataset_name, dataset_config_name\ File "/user/dara/dev/codes/seq2seq/data/tokenize_datasets.py", line 96, in get_tokenized_dataset load_from_cache_file=not data_args.overwrite_cache, File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/dataset_dict.py", line 448, in map for k, dataset in self.items() File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/dataset_dict.py", line 448, in <dictcomp> for k, dataset in self.items() File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1309, in map update_data=update_data, File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 204, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/fingerprint.py", line 337, in wrapper out = func(self, *args, **kwargs) File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1574, in _map_single batch, indices, check_same_num_examples=len(self.list_indexes()) > 0, offset=offset File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1490, in apply_function_on_filtered_inputs function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) File "/user/dara/dev/codes/seq2seq/data/tokenize_datasets.py", line 89, in tokenize_function return tokenizer(examples[text_column_name], return_special_tokens_mask=True) File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2347, in __call__ **kwargs, File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2532, in batch_encode_plus **kwargs, File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 384, in _batch_encode_plus is_pretokenized=is_split_into_words, pyo3_runtime.PanicException: index out of bounds: the len is 617 but the index is 617 `
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2056/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2056/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2055
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2055/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2055/comments
https://api.github.com/repos/huggingface/datasets/issues/2055/events
https://github.com/huggingface/datasets/issues/2055
831,684,312
MDU6SXNzdWU4MzE2ODQzMTI=
2,055
is there a way to override a dataset object saved with save_to_disk?
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi\r\nYou can rename the arrow file and update the name in `state.json`", "I tried this way, but when there is a mapping process to the dataset, it again uses a random cache name. atm, I am trying to use the following method by setting an exact cache file,\r\n\r\n```\r\n dataset_with_embedding =csv_dataset.map(\r\n partial(self.embed, ctx_encoder=ctx_encoder, ctx_tokenizer=self.context_tokenizer),\r\n batched=True,\r\n batch_size=1,\r\n features=new_features,\r\n cache_file_name=cache_arrow_path,\r\n load_from_cache_file=False\r\n )\r\n```\r\nSo here we set a cache_file_name , after this it uses the same file name when saving again and again. ", "I'm not sure I understand your issue, can you elaborate ?\r\n\r\n`cache_file_name` is indeed an argument you can set to specify the cache file that will be used for the processed dataset. By default the file is named with something like `cache-<fingerprint>.arrow` where the fingerprint is a hash.", "Let's say I am updating a set of embedding in a dataset that is around 40GB inside a training loop every 500 steps (Ex: calculating the embeddings in updated ctx_encoder in RAG and saving it to the passage path). So when we use **dataset_object.save_to_disk('passage_path_directory')** it will save the new dataset object every time with a random file name, especially when we do some transformations to dataset objects such as map or shards. This way, we keep collecting unwanted files that will eventually eat up all the disk space. \r\n\r\nBut if we can save the dataset object every time by a single name like **data_shard_1.arrow**, it will automatically remove the previous file and save the new one in the same directory. I found the above-mentioned code snippet useful to complete this task. \r\n\r\nIs this clear?" ]
1,615,805,453,000
1,616,385,977,000
1,616,385,977,000
NONE
null
null
null
At the moment when I use save_to_disk, it uses the arbitrary name for the arrow file. Is there a way to override such an object?
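Editor's note: a minimal sketch of the `cache_file_name` workaround described in the comments above; the CSV path, column name, and mapping function are placeholders, not the reporter's actual code:

```python
from datasets import load_dataset

# Hypothetical CSV with a "text" column, standing in for the real knowledge base.
dataset = load_dataset("csv", data_files="passages.csv", split="train")

def add_length(batch):
    # stand-in for the embedding step in the report
    return {"length": [len(t) for t in batch["text"]]}

dataset = dataset.map(
    add_length,
    batched=True,
    cache_file_name="cache/data_shard_1.arrow",  # fixed name, so the same file is reused each run
    load_from_cache_file=False,
)
```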
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2055/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2055/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2054
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2054/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2054/comments
https://api.github.com/repos/huggingface/datasets/issues/2054/events
https://github.com/huggingface/datasets/issues/2054
831,597,665
MDU6SXNzdWU4MzE1OTc2NjU=
2,054
Could not find file for ZEST dataset
{ "login": "bhadreshpsavani", "id": 26653468, "node_id": "MDQ6VXNlcjI2NjUzNDY4", "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhadreshpsavani", "html_url": "https://github.com/bhadreshpsavani", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "The zest dataset url was changed (allenai/zest#3) and #2057 should resolve this.", "This has been fixed in #2057 by @matt-peters (thanks again !)\r\n\r\nThe fix is available on the master branch and we'll do a new release very soon :)", "Thanks @lhoestq and @matt-peters ", "I am closing this issue since its fixed!" ]
1,615,799,518,000
1,620,034,224,000
1,620,034,224,000
CONTRIBUTOR
null
null
null
I am trying to use zest dataset from Allen AI using below code in colab, ``` !pip install -q datasets from datasets import load_dataset dataset = load_dataset("zest") ``` I am getting the following error, ``` Using custom data configuration default Downloading and preparing dataset zest/default (download: 5.53 MiB, generated: 19.96 MiB, post-processed: Unknown size, total: 25.48 MiB) to /root/.cache/huggingface/datasets/zest/default/0.0.0/1f7a230fbfc964d979bbca0f0130fbab3259fce547ee758ad8aa4f9c9bec6cca... --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) <ipython-input-6-18dbbc1a4b8a> in <module>() 1 from datasets import load_dataset 2 ----> 3 dataset = load_dataset("zest") 9 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token) 612 ) 613 elif response is not None and response.status_code == 404: --> 614 raise FileNotFoundError("Couldn't find file at {}".format(url)) 615 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}") 616 raise ConnectionError("Couldn't reach {}".format(url)) FileNotFoundError: Couldn't find file at https://ai2-datasets.s3-us-west-2.amazonaws.com/zest/zest.zip ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2054/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2054/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2053
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2053/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2053/comments
https://api.github.com/repos/huggingface/datasets/issues/2053/events
https://github.com/huggingface/datasets/pull/2053
831,151,728
MDExOlB1bGxSZXF1ZXN0NTkyNTM4ODY2
2,053
Add bAbI QA tasks
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq,\r\n\r\nShould I remove the 160 configurations? Is it too much?\r\n\r\nEDIT:\r\nCan you also check the task category? I'm not sure if there is an appropriate tag for the same.", "Thanks for the changes !\r\n\r\n> Should I remove the 160 configurations? Is it too much?\r\n\r\nYea 160 configuration is a lot.\r\nMaybe this dataset can work with parameters `type` and `task_no` ?\r\nYou can just remove the configuration in BUILDER_CONFIGS to only keep a few ones.\r\nAlso feel free to add an example in the dataset card of how to load the other configurations\r\n```\r\nload_dataset(\"babi_qa\", type=\"hn\", task_no=\"qa1\")\r\n```\r\nfor example, and with a list of the possible combinations.\r\n\r\n> Can you also check the task category? I'm not sure if there is an appropriate tag for the same.\r\n\r\nIt looks appropriate, thanks :)", "Hi @lhoestq \r\n\r\nI'm unable to test it locally using:\r\n```python\r\nload_dataset(\"datasets/babi_qa\", type=\"hn\", task_no=\"qa1\")\r\n```\r\nIt raises an error:\r\n```python\r\nTypeError: __init__() got an unexpected keyword argument 'type'\r\n```\r\nWill this be possible only after merging? Or am I missing something here?", "Can you try adding this class attribute to `BabiQa` ?\r\n```python\r\nBUILDER_CONFIG_CLASS = BabiQaConfig\r\n```\r\nThis should fix the TypeError issue you got", "My bad. Thanks a lot!", "Hi @lhoestq \r\n\r\nI have added the changes. Only the \"qa1\" task for each category is included. Also, I haven't removed the size categories and other description because I think it will still be useful. I have updated the line in README showing the example.\r\n\r\nThanks,\r\nGunjan", "Hi @lhoestq,\r\n\r\nDoes this look good now?" ]
1,615,727,079,000
1,617,021,708,000
1,617,021,708,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2053", "html_url": "https://github.com/huggingface/datasets/pull/2053", "diff_url": "https://github.com/huggingface/datasets/pull/2053.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2053.patch", "merged_at": 1617021708000 }
- **Name:** *The (20) QA bAbI tasks* - **Description:** *The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems.* - **Paper:** [arXiv](https://arxiv.org/pdf/1502.05698.pdf) - **Data:** [Facebook Research Page](https://research.fb.com/downloads/babi/) - **Motivation:** This is a unique dataset with story-based Question Answering. It is a part of the `bAbI` project by Facebook Research. **Note**: I have currently added all the 160 configs. If this seems impractical, I can keep only a few. While each `dummy_data.zip` weighs a few KBs, overall it is around 1.3MB for all configurations. This is problematic. Let me know what is to be done. Thanks :) ### Checkbox - [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [x] Fill the `_DESCRIPTION` and `_CITATION` variables - [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()` - [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class. - [x] Generate the metadata file `dataset_infos.json` for all configurations - [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs - [x] Both tests for the real data and the dummy data pass.
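Editor's note: a hypothetical sketch of the parameterized config discussed in the review comments of this PR (the class and attribute names follow the thread; the merged script may differ):

```python
import datasets

class BabiQaConfig(datasets.BuilderConfig):
    """Carries `type` and `task_no` so users can call
    load_dataset("babi_qa", type="hn", task_no="qa1")
    instead of picking one of 160 fixed configs."""

    def __init__(self, type=None, task_no=None, **kwargs):
        super().__init__(**kwargs)
        self.type = type
        self.task_no = task_no

# In the builder class, setting BUILDER_CONFIG_CLASS = BabiQaConfig
# is what makes the extra keyword arguments accepted (per the comments).
```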
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2053/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2053/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2052
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2052/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2052/comments
https://api.github.com/repos/huggingface/datasets/issues/2052/events
https://github.com/huggingface/datasets/issues/2052
831,135,704
MDU6SXNzdWU4MzExMzU3MDQ=
2,052
Timit_asr dataset repeats examples
{ "login": "fermaat", "id": 7583522, "node_id": "MDQ6VXNlcjc1ODM1MjI=", "avatar_url": "https://avatars.githubusercontent.com/u/7583522?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fermaat", "html_url": "https://github.com/fermaat", "followers_url": "https://api.github.com/users/fermaat/followers", "following_url": "https://api.github.com/users/fermaat/following{/other_user}", "gists_url": "https://api.github.com/users/fermaat/gists{/gist_id}", "starred_url": "https://api.github.com/users/fermaat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fermaat/subscriptions", "organizations_url": "https://api.github.com/users/fermaat/orgs", "repos_url": "https://api.github.com/users/fermaat/repos", "events_url": "https://api.github.com/users/fermaat/events{/privacy}", "received_events_url": "https://api.github.com/users/fermaat/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nthis was fixed by #1995, so you can wait for the next release or install the package directly from the master branch with the following command: \r\n```bash\r\npip install git+https://github.com/huggingface/datasets\r\n```", "Ty!" ]
1,615,722,223,000
1,615,804,636,000
1,615,804,636,000
NONE
null
null
null
**Summary**
When loading the timit_asr dataset on datasets 1.4+, every row in the dataset is the same.

**Steps to reproduce**
As an example, the code below prints the text of the training split:

```
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
timit['train']['text']
#['Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
```

The same behavior happens for other columns.

**Expected behavior:**
Different rows from the actual timit_asr dataset

**Actual behavior:**
When loading the timit_asr dataset on datasets 1.4+, every row in the dataset is the same. I've checked datasets 1.3 and the rows are different.

**Debug info**
Python version: Python 3.6.12
Using Conda? PipEnv? PyEnv? Pex? Using pip
OS version: Centos-release-7-9.2009.1.el7.centos.x86_64

**Additional information**
You can check the same behavior on https://huggingface.co/datasets/viewer/?dataset=timit_asr
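Editor's note: a quick check that makes the duplication visible, assuming the pre-fix behavior described above; on an affected version the set collapses to a single element:

```python
from datasets import load_dataset

timit = load_dataset("timit_asr")
texts = timit["train"]["text"]
# On 1.4.x this prints something like "4620 1"; on a fixed version the counts match.
print(len(texts), len(set(texts)))
```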
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2052/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2052/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2051
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2051/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2051/comments
https://api.github.com/repos/huggingface/datasets/issues/2051/events
https://github.com/huggingface/datasets/pull/2051
831,027,021
MDExOlB1bGxSZXF1ZXN0NTkyNDQ2MDU1
2,051
Add MDD Dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq,\r\n\r\nI have added changes from review.", "Thanks for approving :)" ]
1,615,680,065,000
1,616,152,544,000
1,616,149,919,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2051", "html_url": "https://github.com/huggingface/datasets/pull/2051", "diff_url": "https://github.com/huggingface/datasets/pull/2051.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2051.patch", "merged_at": 1616149919000 }
- **Name:** *MDD Dataset* - **Description:** The Movie Dialog dataset (MDD) is designed to measure how well models can perform at goal and non-goal orientated dialog centered around the topic of movies (question answering, recommendation and discussion), from various movie reviews sources such as MovieLens and OMDb. - **Paper:** [arXiv](https://arxiv.org/pdf/1511.06931.pdf) - **Data:** https://research.fb.com/downloads/babi/ - **Motivation:** This is one of the popular dialog datasets, a part of Facebook Research's "bAbI project". ### Checkbox - [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [x] Fill the `_DESCRIPTION` and `_CITATION` variables - [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()` - [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class. - [x] Generate the metadata file `dataset_infos.json` for all configurations - [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs - [x] Both tests for the real data and the dummy data pass. **Note**: I haven't included the following from the data files: `entities` (the file containing list of all entities in the first three subtasks), `dictionary`(the dictionary of words they use in their models), `movie_kb`(contains the knowledge base of information about the movies, actors and other entities that are mentioned in the dialogs). Please let me know if those are needed, and if yes, should I make separate configurations for them?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2051/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2051/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2050
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2050/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2050/comments
https://api.github.com/repos/huggingface/datasets/issues/2050/events
https://github.com/huggingface/datasets/issues/2050
831,006,551
MDU6SXNzdWU4MzEwMDY1NTE=
2,050
Build custom dataset to fine-tune Wav2Vec2
{ "login": "Omarnabk", "id": 72882909, "node_id": "MDQ6VXNlcjcyODgyOTA5", "avatar_url": "https://avatars.githubusercontent.com/u/72882909?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Omarnabk", "html_url": "https://github.com/Omarnabk", "followers_url": "https://api.github.com/users/Omarnabk/followers", "following_url": "https://api.github.com/users/Omarnabk/following{/other_user}", "gists_url": "https://api.github.com/users/Omarnabk/gists{/gist_id}", "starred_url": "https://api.github.com/users/Omarnabk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Omarnabk/subscriptions", "organizations_url": "https://api.github.com/users/Omarnabk/orgs", "repos_url": "https://api.github.com/users/Omarnabk/repos", "events_url": "https://api.github.com/users/Omarnabk/events{/privacy}", "received_events_url": "https://api.github.com/users/Omarnabk/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "@lhoestq - We could simply use the \"general\" json dataset for this no? ", "Sure you can use the json loader\r\n```python\r\ndata_files = {\"train\": \"path/to/your/train_data.json\", \"test\": \"path/to/your/test_data.json\"}\r\ntrain_dataset = load_dataset(\"json\", data_files=data_files, split=\"train\")\r\ntest_dataset = load_dataset(\"json\", data_files=data_files, split=\"test\")\r\n```\r\n\r\nYou just need to make sure that the data contain the paths to the audio files.\r\nIf not, feel free to use `.map()` to add them.", "Many thanks! that was what I was looking for. " ]
1,615,672,870,000
1,615,800,448,000
1,615,800,448,000
NONE
null
null
null
Thank you for your recent tutorial on how to fine-tune Wav2Vec2 on a custom dataset. The example you gave here (https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) was on the CommonVoice dataset. However, what if I want to load my own dataset? I have a manifest (transcripts and their audio files) in a JSON file.
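Editor's note: a minimal sketch of the loading pattern suggested in the comments; the manifest file names and the `file` field are placeholders:

```python
from datasets import load_dataset

# Hypothetical manifests; each JSON record holds e.g. {"file": "...", "text": "..."}.
data_files = {"train": "train_manifest.json", "test": "test_manifest.json"}
train_dataset = load_dataset("json", data_files=data_files, split="train")
test_dataset = load_dataset("json", data_files=data_files, split="test")

# If the manifest stores relative paths, .map() can expand them (per the comment above).
train_dataset = train_dataset.map(lambda ex: {"file": "/data/audio/" + ex["file"]})
```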
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2050/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2050/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2049
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2049/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2049/comments
https://api.github.com/repos/huggingface/datasets/issues/2049/events
https://github.com/huggingface/datasets/pull/2049
830,978,687
MDExOlB1bGxSZXF1ZXN0NTkyNDE2MzQ0
2,049
Fix text-classification tags
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "LGTM, thanks for fixing." ]
1,615,665,102,000
1,615,909,666,000
1,615,909,666,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2049", "html_url": "https://github.com/huggingface/datasets/pull/2049", "diff_url": "https://github.com/huggingface/datasets/pull/2049.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2049.patch", "merged_at": 1615909666000 }
There are different tags for text classification right now: `text-classification` and `text_classification`: ![image](https://user-images.githubusercontent.com/29076344/111042457-856bdf00-8463-11eb-93c9-50a30106a1a1.png). This PR fixes it.
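Editor's note: for reference, a sketch of the unified tag in a dataset card's YAML header, assuming the hyphenated `text-classification` spelling is the form this PR keeps:

```yaml
task_categories:
- text-classification
```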
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2049/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2049/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2048
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2048/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2048/comments
https://api.github.com/repos/huggingface/datasets/issues/2048/events
https://github.com/huggingface/datasets/issues/2048
830,953,431
MDU6SXNzdWU4MzA5NTM0MzE=
2,048
github is not always available - probably need a back up
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,615,658,612,000
1,615,658,612,000
null
CONTRIBUTOR
null
null
null
Yesterday morning github wasn't working: ``` :/tmp$ wget https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py --2021-03-12 18:35:59-- https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.111.133, 185.199.109.133, ... Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected. HTTP request sent, awaiting response... 500 Internal Server Error 2021-03-12 18:36:11 ERROR 500: Internal Server Error. ``` Suggestion: have a failover system that replicates the data on another system and is used when gh isn't reachable? Perhaps gh can be the master and the replica a slave, so there is only one true source.
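Editor's note: until such a mirror exists, a simple client-side mitigation is retry with backoff. This is a hypothetical helper, not part of the library:

```python
import time
import requests

def fetch_with_retry(url, attempts=3, backoff=2.0):
    """Retry a download with exponential backoff when the host returns 5xx errors."""
    for i in range(attempts):
        resp = requests.get(url)
        if resp.ok:
            return resp.content
        time.sleep(backoff * 2 ** i)  # wait longer after each failure
    resp.raise_for_status()
```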
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2048/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2048/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2047
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2047/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2047/comments
https://api.github.com/repos/huggingface/datasets/issues/2047/events
https://github.com/huggingface/datasets/pull/2047
830,626,430
MDExOlB1bGxSZXF1ZXN0NTkyMTI2NzQ3
2,047
Multilingual dIalogAct benchMark (miam)
{ "login": "eusip", "id": 1551356, "node_id": "MDQ6VXNlcjE1NTEzNTY=", "avatar_url": "https://avatars.githubusercontent.com/u/1551356?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eusip", "html_url": "https://github.com/eusip", "followers_url": "https://api.github.com/users/eusip/followers", "following_url": "https://api.github.com/users/eusip/following{/other_user}", "gists_url": "https://api.github.com/users/eusip/gists{/gist_id}", "starred_url": "https://api.github.com/users/eusip/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eusip/subscriptions", "organizations_url": "https://api.github.com/users/eusip/orgs", "repos_url": "https://api.github.com/users/eusip/repos", "events_url": "https://api.github.com/users/eusip/events{/privacy}", "received_events_url": "https://api.github.com/users/eusip/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hello. All aforementioned changes have been made. I've also re-run black on miam.py. :-)", "I will run isort again. Hopefully it resolves the current check_code_quality test failure.", "Once the review period is over, feel free to open a PR to add all the missing information ;)", "Hi! I will follow up right now with one more pull request as I have new anonymous citation information to include." ]
1,615,590,175,000
1,616,495,794,000
1,616,150,833,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2047", "html_url": "https://github.com/huggingface/datasets/pull/2047", "diff_url": "https://github.com/huggingface/datasets/pull/2047.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2047.patch", "merged_at": 1616150833000 }
My collaborators (@EmileChapuis, @PierreColombo) and I within the Affective Computing team at Telecom Paris would like to anonymously publish the miam dataset. It is associated with a publication currently under review. We will update the dataset with full citations once the review period is over.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2047/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2047/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2046
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2046/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2046/comments
https://api.github.com/repos/huggingface/datasets/issues/2046/events
https://github.com/huggingface/datasets/issues/2046
830,423,033
MDU6SXNzdWU4MzA0MjMwMzM=
2,046
add_faiss_index gets very slow when doing it iteratively
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I think faiss automatically sets the number of threads to use to build the index.\r\nCan you check how many CPU cores are being used when you build the index in `use_own_knowleldge_dataset` as compared to this script ? Are there other programs running (maybe for rank>0) ?", "Hi,\r\n I am running the add_faiss_index during the training process of the RAG from the master process (rank 0). But at the exact moment, I do not run any other process since I do it in every 5000 training steps. \r\n \r\n I think what you say is correct. It depends on the number of CPU cores. I did an experiment to compare the time taken to finish the add_faiss_index process on use_own_knowleldge_dataset.py vs the training loop thing. The training loop thing takes 40 mins more. It might be natural right? \r\n \r\n \r\n at the moment it uses around 40 cores of a 96 core machine (I am fine-tuning the entire process). ", "Can you try to set the number of threads manually ?\r\nIf you set the same number of threads for both the `use_own_knowledge_dataset.py` and RAG training, it should take the same amount of time.\r\nYou can see how to set the number of thread in the faiss wiki: https://github.com/facebookresearch/faiss/wiki/Threads-and-asynchronous-calls", "Ok, I will report the details too soon. I am the first one on the list and currently add_index being computed for the 3rd time in the loop. Actually seems like the time is taken to complete each interaction is the same, but around 1 hour more compared to running it without the training loop. A the moment this takes 5hrs and 30 mins. If there is any way to faster the process, an end-to-end rag will be perfect. So I will also try out with different thread numbers too. \r\n\r\n![image](https://user-images.githubusercontent.com/16892570/111453464-798c5f80-8778-11eb-86d0-19d212f58e38.png)\r\n", "@lhoestq on a different note, I read about using Faiss-GPU, but the documentation says we should use it when the dataset has the ability to fit into the GPU memory. Although this might work, in the long-term this is not that practical for me.\r\n\r\nhttps://github.com/matsui528/faiss_tips", "@lhoestq \r\n\r\nHi, I executed the **use_own_dataset.py** script independently and ask a few of my friends to run their programs in the HPC machine at the same time. \r\n\r\n Once there are so many other processes are running the add_index function gets slows down naturally. So basically the speed of the add_index depends entirely on the number of CPU processes. Then I set the number of threads as you have mentioned and got actually the same time for RAG training and independat running. So you are correct! :) \r\n\r\n \r\n Then I added this [issue in Faiss repostiary](https://github.com/facebookresearch/faiss/issues/1767). I got an answer saying our current **IndexHNSWFlat** can get slow for 30 million vectors and it would be better to use alternatives. What do you think?", "It's a matter of tradeoffs.\r\nHSNW is fast at query time but takes some time to build.\r\nA flat index is flat to build but is \"slow\" at query time.\r\nAn IVF index is probably a good choice for you: fast building and fast queries (but still slower queries than HSNW).\r\n\r\nNote that for an IVF index you would need to have an `nprobe` parameter (number of cells to visit for one query, there are `nlist` in total) that is not too small in order to have good retrieval accuracy, but not too big otherwise the queries will take too much time. 
From the faiss documentation:\r\n> The nprobe parameter is always a way of adjusting the tradeoff between speed and accuracy of the result. Setting nprobe = nlist gives the same result as the brute-force search (but slower).\r\n\r\nFrom my experience with indexes on DPR embeddings, setting nprobe around 1/4 of nlist gives really good retrieval accuracy and there's no need to have a value higher than that (or you would need to brute-force in order to see a difference).", "@lhoestq \r\n\r\nThanks a lot for sharing all this prior knowledge. \r\n\r\nJust asking what would be a good nlist of parameters for 30 million embeddings?", "When IVF is used alone, nlist should be between `4*sqrt(n)` and `16*sqrt(n)`.\r\nFor more details take a look at [this section of the Faiss wiki](https://github.com/facebookresearch/faiss/wiki/Guidelines-to-choose-an-index#how-big-is-the-dataset)", "Thanks a lot. I was lost with calling the index from class and using faiss_index_factory. ", "@lhoestq Thanks a lot for the help you have given to solve this issue. As per my experiments, IVF index suits well for my case and it is a lot faster. The use of this can make the entire RAG end-to-end trainable lot faster. So I will close this issue. Will do the final PR soon. " ]
1,615,580,838,000
1,616,624,951,000
1,616,624,951,000
NONE
null
null
null
As the code below suggests, I want to run add_faiss_index every nth iteration of the training loop. I have 7.2 million documents. Usually it takes 2.5 hours (if I run it as a separate process, similar to the script given in rag/use_own_knowledge_dataset.py). Now it usually takes 5hrs. Is this normal? Any way to make this process faster? @lhoestq

```
def training_step(self, batch, batch_idx) -> Dict:
    if (not batch_idx == 0) and (batch_idx % 5 == 0):
        print("******************************************************")
        ctx_encoder = self.trainer.model.module.module.model.rag.ctx_encoder
        model_copy = type(ctx_encoder)(self.config_dpr)  # get a new instance; this will be loaded on the CPU
        model_copy.load_state_dict(ctx_encoder.state_dict())  # copy weights and stuff

        list_of_gpus = ['cuda:2', 'cuda:3']
        c_dir = '/custom/cache/dir'
        kb_dataset = load_dataset("csv", data_files=[self.custom_config.csv_path], split="train", delimiter="\t", column_names=["title", "text"], cache_dir=c_dir)
        print(kb_dataset)

        n = len(list_of_gpus)  # number of dedicated GPUs
        kb_list = [kb_dataset.shard(n, i, contiguous=True) for i in range(n)]
        # kb_dataset.save_to_disk('/hpc/gsir059/MY-Test/RAY/transformers/examples/research_projects/rag/haha-dir')

        print(self.trainer.global_rank)
        dataset_shards = self.re_encode_kb(model_copy.to(device=list_of_gpus[self.trainer.global_rank]), kb_list[self.trainer.global_rank])
        output = [None for _ in list_of_gpus]
        # self.trainer.accelerator_connector.accelerator.barrier("embedding_process")
        dist.all_gather_object(output, dataset_shards)

        # Creation and re-initialization of the new index
        if self.trainer.global_rank == 0:  # saving will be done in the main process
            combined_dataset = concatenate_datasets(output)
            passages_path = self.config.passages_path
            logger.info("saving the dataset")
            # combined_dataset.save_to_disk('/hpc/gsir059/MY-Test/RAY/transformers/examples/research_projects/rag/MY-Passage')
            combined_dataset.save_to_disk(passages_path)
            logger.info("Add faiss index to the dataset that consists of embeddings")

            embedding_dataset = combined_dataset
            index = faiss.IndexHNSWFlat(768, 128, faiss.METRIC_INNER_PRODUCT)
            embedding_dataset.add_faiss_index("embeddings", custom_index=index)
            embedding_dataset.get_index("embeddings").save(self.config.index_path)
```
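Editor's note: following the IVF recommendation in the comments, a hedged sketch of how such an index could be built. The sizes and the random sample are illustrative only; the index would be passed via `custom_index=` to `add_faiss_index`, as in the snippet above:

```python
import math

import faiss
import numpy as np

# Illustrative numbers: ~7.2M passages of dim 768, as in this report.
n, d = 7_200_000, 768
nlist = int(4 * math.sqrt(n))  # lower end of the 4*sqrt(n)..16*sqrt(n) range from the comments

index = faiss.index_factory(d, f"IVF{nlist},Flat", faiss.METRIC_INNER_PRODUCT)

# IVF indexes must be trained before vectors are added; random data stands in
# for real DPR embeddings here.
sample = np.random.rand(200_000, d).astype("float32")
index.train(sample)
index.add(sample)

# ~1/4 of nlist gives good recall per the discussion above.
faiss.extract_index_ivf(index).nprobe = nlist // 4
```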
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2046/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2046/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2045
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2045/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2045/comments
https://api.github.com/repos/huggingface/datasets/issues/2045/events
https://github.com/huggingface/datasets/pull/2045
830,351,527
MDExOlB1bGxSZXF1ZXN0NTkxODc2Mjcz
2,045
Preserve column ordering in Dataset.rename_column
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Not sure why CI isn't triggered.\r\n\r\n@lhoestq Can you please help me with this? ", "I don't know how to trigger it manually, but an empty commit should do the job" ]
1,615,573,607,000
1,615,906,085,000
1,615,905,305,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2045", "html_url": "https://github.com/huggingface/datasets/pull/2045", "diff_url": "https://github.com/huggingface/datasets/pull/2045.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2045.patch", "merged_at": 1615905305000 }
Currently `Dataset.rename_column` doesn't necessarily preserve the order of the columns: ```python >>> from datasets import Dataset >>> d = Dataset.from_dict({'sentences': ["s1", "s2"], 'label': [0, 1]}) >>> d Dataset({ features: ['sentences', 'label'], num_rows: 2 }) >>> d.rename_column('sentences', 'text') Dataset({ features: ['label', 'text'], num_rows: 2 }) ``` This PR fixes this.
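Editor's note: with ordering preserved (the behavior this PR introduces), the same call would be expected to print:

```python
>>> d.rename_column('sentences', 'text')
Dataset({
    features: ['text', 'label'],
    num_rows: 2
})
```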
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2045/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2045/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2044
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2044/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2044/comments
https://api.github.com/repos/huggingface/datasets/issues/2044/events
https://github.com/huggingface/datasets/pull/2044
830,339,905
MDExOlB1bGxSZXF1ZXN0NTkxODY2NzM1
2,044
Add CBT dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq,\r\n\r\nI have added changes from the review.", "Thanks for approving @lhoestq " ]
1,615,572,259,000
1,616,152,213,000
1,616,149,755,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2044", "html_url": "https://github.com/huggingface/datasets/pull/2044", "diff_url": "https://github.com/huggingface/datasets/pull/2044.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2044.patch", "merged_at": 1616149755000 }
This PR adds the [CBT Dataset](https://arxiv.org/abs/1511.02301). Note that I have also added the `raw` dataset as a separate configuration. I couldn't find a suitable "task" for it in YAML tags. The dummy files have one example each, as the examples are slightly big. For `raw` dataset, I just used top few lines, because they are entire books and would take up a lot of space. Let me know in case of any issues.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2044/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2044/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2043
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2043/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2043/comments
https://api.github.com/repos/huggingface/datasets/issues/2043/events
https://github.com/huggingface/datasets/pull/2043
830,279,098
MDExOlB1bGxSZXF1ZXN0NTkxODE1ODAz
2,043
Support pickle protocol for dataset splits defined as ReadInstruction
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq But we don't perform conversion to a `NamedSplit` if `_split` is not a string which means it **will** be a `ReadInstruction` after reloading.", "Yes right ! I read it wrong.\r\nPerfect then" ]
1,615,566,911,000
1,615,904,738,000
1,615,903,505,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2043", "html_url": "https://github.com/huggingface/datasets/pull/2043", "diff_url": "https://github.com/huggingface/datasets/pull/2043.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2043.patch", "merged_at": 1615903505000 }
Fixes #2022 (+ some style fixes)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2043/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2043/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2042
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2042/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2042/comments
https://api.github.com/repos/huggingface/datasets/issues/2042/events
https://github.com/huggingface/datasets/pull/2042
830,190,276
MDExOlB1bGxSZXF1ZXN0NTkxNzQwNzQ3
2,042
Fix arrow memory checks issue in tests
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,560,592,000
1,615,561,463,000
1,615,561,462,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2042", "html_url": "https://github.com/huggingface/datasets/pull/2042", "diff_url": "https://github.com/huggingface/datasets/pull/2042.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2042.patch", "merged_at": 1615561462000 }
The tests currently fail on `master` because the arrow memory verification doesn't return the expected memory evolution when loading an arrow table in memory. From my experiments, the tests fail only when the full test suite is run. This made me think that arrow objects from some tests might not free their memory until later, causing the memory verifications in other tests to fail. Running the garbage collector before checking the arrow memory usage seems to fix this issue. I added a context manager `assert_arrow_memory_increases` that we can use in tests and that deals with the gc.
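Editor's note: for illustration, a minimal sketch of what such a gc-aware context manager might look like; the actual helper in the repo may differ:

```python
import gc
from contextlib import contextmanager

import pyarrow as pa

@contextmanager
def assert_arrow_memory_increases():
    # Collect leftover Arrow objects from earlier tests first, so they
    # don't distort the before/after measurement described above.
    gc.collect()
    before = pa.total_allocated_bytes()
    yield
    assert pa.total_allocated_bytes() > before
```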
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2042/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2042/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2041
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2041/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2041/comments
https://api.github.com/repos/huggingface/datasets/issues/2041/events
https://github.com/huggingface/datasets/pull/2041
830,180,803
MDExOlB1bGxSZXF1ZXN0NTkxNzMyNzMw
2,041
Doc2dial update data_infos and data_loaders
{ "login": "songfeng", "id": 2062185, "node_id": "MDQ6VXNlcjIwNjIxODU=", "avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4", "gravatar_id": "", "url": "https://api.github.com/users/songfeng", "html_url": "https://github.com/songfeng", "followers_url": "https://api.github.com/users/songfeng/followers", "following_url": "https://api.github.com/users/songfeng/following{/other_user}", "gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/songfeng/subscriptions", "organizations_url": "https://api.github.com/users/songfeng/orgs", "repos_url": "https://api.github.com/users/songfeng/repos", "events_url": "https://api.github.com/users/songfeng/events{/privacy}", "received_events_url": "https://api.github.com/users/songfeng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,559,969,000
1,615,892,960,000
1,615,892,960,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2041", "html_url": "https://github.com/huggingface/datasets/pull/2041", "diff_url": "https://github.com/huggingface/datasets/pull/2041.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2041.patch", "merged_at": 1615892960000 }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2041/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2041/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2040
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2040/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2040/comments
https://api.github.com/repos/huggingface/datasets/issues/2040/events
https://github.com/huggingface/datasets/issues/2040
830,169,387
MDU6SXNzdWU4MzAxNjkzODc=
2,040
ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk
{ "login": "simonschoe", "id": 53626067, "node_id": "MDQ6VXNlcjUzNjI2MDY3", "avatar_url": "https://avatars.githubusercontent.com/u/53626067?v=4", "gravatar_id": "", "url": "https://api.github.com/users/simonschoe", "html_url": "https://github.com/simonschoe", "followers_url": "https://api.github.com/users/simonschoe/followers", "following_url": "https://api.github.com/users/simonschoe/following{/other_user}", "gists_url": "https://api.github.com/users/simonschoe/gists{/gist_id}", "starred_url": "https://api.github.com/users/simonschoe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/simonschoe/subscriptions", "organizations_url": "https://api.github.com/users/simonschoe/orgs", "repos_url": "https://api.github.com/users/simonschoe/repos", "events_url": "https://api.github.com/users/simonschoe/events{/privacy}", "received_events_url": "https://api.github.com/users/simonschoe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! To help me understand the situation, can you print the values of `load_from_disk(PATH_DATA_CLS_A)['train']._indices_data_files` and `load_from_disk(PATH_DATA_CLS_B)['train']._indices_data_files` ?\r\nThey should both have a path to an arrow file\r\n\r\nAlso note that from #2025 concatenating datasets will no longer have such restrictions.", "Sure, thanks for the fast reply!\r\n\r\nFor dataset A: `[{'filename': 'drive/MyDrive/data_target_task/dataset_a/train/cache-4797266bf4db1eb7.arrow'}]`\r\nFor dataset B: `[]`\r\n\r\nNo clue why for B it returns nothing. `PATH_DATA_CLS_B` is exactly the same in `save_to_disk` and `load_from_disk`... Also I can verify that the folder physically exists under 'drive/MyDrive/data_target_task/dataset_b/'", "In the next release you'll be able to concatenate any kinds of dataset (either from memory or from disk).\r\n\r\nFor now I'd suggest you to flatten the indices of the A and B datasets. This will remove the indices mapping and you will be able to concatenate them. You can flatten the indices with\r\n```python\r\ndataset = dataset.flatten_indices()\r\n```", "Indeed this works. Not the most elegant solution, but it does the trick. Thanks a lot! " ]
1,615,559,220,000
1,628,100,043,000
1,628,100,043,000
NONE
null
null
null
Hi there, I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects): ```python concatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']]) ``` Yielding the following error: ```python ValueError: Datasets' indices should ALL come from memory, or should ALL come from disk. However datasets' indices [1] come from memory and datasets' indices [0] come from disk. ``` Been trying to solve this for quite some time now. Both `DataDict` have been created by reading in a `csv` via `load_dataset` and subsequently processed using the various `datasets` methods (i.e. filter, map, remove col, rename col). Can't figure out tho... `load_from_disk(PATH_DATA_CLS_A)['train']` yields: ```python Dataset({ features: ['labels', 'text'], num_rows: 785 }) ``` `load_from_disk(PATH_DATA_CLS_B)['train']` yields: ```python Dataset({ features: ['labels', 'text'], num_rows: 3341 }) ```
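Editor's note: a sketch of the `flatten_indices` workaround given in the comments; the paths are the placeholders from this report:

```python
from datasets import concatenate_datasets, load_from_disk

PATH_DATA_CLS_A = "drive/MyDrive/data_target_task/dataset_a"  # placeholder paths from the thread
PATH_DATA_CLS_B = "drive/MyDrive/data_target_task/dataset_b"

a = load_from_disk(PATH_DATA_CLS_A)["train"].flatten_indices()  # drops the indices mapping
b = load_from_disk(PATH_DATA_CLS_B)["train"].flatten_indices()
combined = concatenate_datasets([a, b])
```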
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2040/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2040/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2039
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2039/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2039/comments
https://api.github.com/repos/huggingface/datasets/issues/2039/events
https://github.com/huggingface/datasets/pull/2039
830,047,652
MDExOlB1bGxSZXF1ZXN0NTkxNjE3ODY3
2,039
Doc2dial rc
{ "login": "songfeng", "id": 2062185, "node_id": "MDQ6VXNlcjIwNjIxODU=", "avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4", "gravatar_id": "", "url": "https://api.github.com/users/songfeng", "html_url": "https://github.com/songfeng", "followers_url": "https://api.github.com/users/songfeng/followers", "following_url": "https://api.github.com/users/songfeng/following{/other_user}", "gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/songfeng/subscriptions", "organizations_url": "https://api.github.com/users/songfeng/orgs", "repos_url": "https://api.github.com/users/songfeng/repos", "events_url": "https://api.github.com/users/songfeng/events{/privacy}", "received_events_url": "https://api.github.com/users/songfeng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,550,188,000
1,615,563,156,000
1,615,563,156,000
CONTRIBUTOR
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2039", "html_url": "https://github.com/huggingface/datasets/pull/2039", "diff_url": "https://github.com/huggingface/datasets/pull/2039.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2039.patch", "merged_at": null }
Added a fix to handle the case where the last turn is a user turn.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2039/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2039/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2038
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2038/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2038/comments
https://api.github.com/repos/huggingface/datasets/issues/2038/events
https://github.com/huggingface/datasets/issues/2038
830,036,875
MDU6SXNzdWU4MzAwMzY4NzU=
2,038
outdated dataset_infos.json might fail verifications
{ "login": "songfeng", "id": 2062185, "node_id": "MDQ6VXNlcjIwNjIxODU=", "avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4", "gravatar_id": "", "url": "https://api.github.com/users/songfeng", "html_url": "https://github.com/songfeng", "followers_url": "https://api.github.com/users/songfeng/followers", "following_url": "https://api.github.com/users/songfeng/following{/other_user}", "gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/songfeng/subscriptions", "organizations_url": "https://api.github.com/users/songfeng/orgs", "repos_url": "https://api.github.com/users/songfeng/repos", "events_url": "https://api.github.com/users/songfeng/events{/privacy}", "received_events_url": "https://api.github.com/users/songfeng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting.\r\n\r\nTo update the dataset_infos.json you can run:\r\n```\r\ndatasets-cli test ./datasets/doc2dial --all_configs --save_infos --ignore_verifications\r\n```", "Fixed by #2041, thanks again @songfeng !" ]
1,615,549,314,000
1,615,912,060,000
1,615,912,060,000
CONTRIBUTOR
null
null
null
The [doc2dial/dataset_infos.json](https://github.com/huggingface/datasets/blob/master/datasets/doc2dial/dataset_infos.json) is outdated. It makes the data loader fail when verifying the download checksums, etc. Could you please update this file, or point me to how to update it myself? Thank you.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2038/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2038/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2037
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2037/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2037/comments
https://api.github.com/repos/huggingface/datasets/issues/2037/events
https://github.com/huggingface/datasets/pull/2037
829,919,685
MDExOlB1bGxSZXF1ZXN0NTkxNTA4MTQz
2,037
Fix: Wikipedia - save memory by replacing root.clear with elem.clear
{ "login": "miyamonz", "id": 6331508, "node_id": "MDQ6VXNlcjYzMzE1MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/6331508?v=4", "gravatar_id": "", "url": "https://api.github.com/users/miyamonz", "html_url": "https://github.com/miyamonz", "followers_url": "https://api.github.com/users/miyamonz/followers", "following_url": "https://api.github.com/users/miyamonz/following{/other_user}", "gists_url": "https://api.github.com/users/miyamonz/gists{/gist_id}", "starred_url": "https://api.github.com/users/miyamonz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/miyamonz/subscriptions", "organizations_url": "https://api.github.com/users/miyamonz/orgs", "repos_url": "https://api.github.com/users/miyamonz/repos", "events_url": "https://api.github.com/users/miyamonz/events{/privacy}", "received_events_url": "https://api.github.com/users/miyamonz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The error you got is minor and appeared in the last version of pyarrow, we'll fix the CI to take this into account. You can ignore it" ]
1,615,540,920,000
1,616,479,696,000
1,615,892,482,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2037", "html_url": "https://github.com/huggingface/datasets/pull/2037", "diff_url": "https://github.com/huggingface/datasets/pull/2037.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2037.patch", "merged_at": 1615892482000 }
See: https://github.com/huggingface/datasets/issues/2031 What I did: - replaced `root.clear` with `elem.clear` - removed the lines that fetch the root element - `$ make style` - `$ make test` - some tests required extra pip packages, so I installed them. The test results on origin/master and on my branch are the same, so I think the remaining failure is not related to my modification, is it? ``` ==================================================================================== short test summary info ==================================================================================== FAILED tests/test_arrow_writer.py::TypedSequenceTest::test_catch_overflow - AssertionError: OverflowError not raised ============================================================= 1 failed, 2332 passed, 5138 skipped, 70 warnings in 91.75s (0:01:31) ============================================================== make: *** [Makefile:19: test] Error 1 ``` Is there anything else I should do?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2037/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2037/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2036
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2036/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2036/comments
https://api.github.com/repos/huggingface/datasets/issues/2036/events
https://github.com/huggingface/datasets/issues/2036
829,909,258
MDU6SXNzdWU4Mjk5MDkyNTg=
2,036
Cannot load wikitext
{ "login": "Gpwner", "id": 19349207, "node_id": "MDQ6VXNlcjE5MzQ5MjA3", "avatar_url": "https://avatars.githubusercontent.com/u/19349207?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Gpwner", "html_url": "https://github.com/Gpwner", "followers_url": "https://api.github.com/users/Gpwner/followers", "following_url": "https://api.github.com/users/Gpwner/following{/other_user}", "gists_url": "https://api.github.com/users/Gpwner/gists{/gist_id}", "starred_url": "https://api.github.com/users/Gpwner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Gpwner/subscriptions", "organizations_url": "https://api.github.com/users/Gpwner/orgs", "repos_url": "https://api.github.com/users/Gpwner/repos", "events_url": "https://api.github.com/users/Gpwner/events{/privacy}", "received_events_url": "https://api.github.com/users/Gpwner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Solved!" ]
1,615,540,179,000
1,615,797,902,000
1,615,797,884,000
NONE
null
null
null
When I execute this code ``` >>> from datasets import load_dataset >>> test_dataset = load_dataset("wikitext") ``` I get an error; any help? ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 487, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/wikitext/wikitext.py ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2036/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2036/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2035
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2035/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2035/comments
https://api.github.com/repos/huggingface/datasets/issues/2035/events
https://github.com/huggingface/datasets/issues/2035
829,475,544
MDU6SXNzdWU4Mjk0NzU1NDQ=
2,035
wiki40b/wikipedia for almost all languages cannot be downloaded
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Dear @lhoestq, for the wikipedia dataset I also get the same error; I would greatly appreciate it if you could have a look into this dataset as well. Below please find the command to reproduce the error:\r\n\r\n```\r\ndataset = load_dataset(\"wikipedia\", \"20200501.bg\")\r\nprint(dataset)\r\n```\r\n\r\nYour library is my only chance to be able to train the models at scale and I am grateful for your help.\r\n\r\n", "Hi @dorost1234,\r\nTry installing this library first, `pip install 'apache-beam[gcp]' --use-feature=2020-resolver`, followed by loading the dataset like this using a beam runner.\r\n\r\n`dataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')`\r\n\r\nI also read in the error stack trace that:\r\n\r\n> Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc.\r\n\r\nIt worked perfectly fine after this (ignore these warnings)\r\n\r\n![image](https://user-images.githubusercontent.com/19718818/110908410-c7e2ce00-8334-11eb-8d10-7354359e9ec3.png)\r\n\r\n", "For the wikipedia dataset, it looks like the files it's looking for are no longer available. For `bg`, I checked [here](https://dumps.wikimedia.org/bgwiki/). For this, I think `dataset_infos.json` for this dataset has to be made again? You'll also have to load this dataset using a beam runner.\r\n\r\n", "Hello @dorost1234,\r\n\r\nIndeed, Wikipedia datasets need a lot of preprocessing and this is done using Apache Beam. That is the reason why it is required that you install Apache Beam in order to perform this preprocessing.\r\n\r\nFor some specific default parameters (English Wikipedia), Hugging Face has already preprocessed the dataset for you (and it is stored in the cloud). That is the reason why you do not get the error for English: the preprocessing is already done by HF and you just get the preprocessed dataset; Apache Beam is not required in that case.", "Hi\nI would really appreciate it if huggingface could kindly provide the preprocessed\ndatasets; processing these datasets requires sufficiently large resources\nthat I unfortunately do not have access to, and perhaps many others too.\nthanks\n", "Hi everyone\r\nthanks for the helpful pointers. I did it as @bhavitvyamalik suggested, but for me it freezes on this command for several hours: \r\n\r\n`Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /users/dara/cache/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...\r\n`\r\n\r\nDo you know how long this takes? Are there any specific requirements the machine should have, like very large memory or so? @lhoestq \r\n\r\nthanks \r\n\r\n\r\n", "Hi @dorost1234, \r\nThe dataset size is 631.84 MiB, so depending on your internet speed it'll take some time. You can monitor your internet speed meanwhile to see if it's downloading the dataset or not (use `nload` if you're using linux/mac to monitor the same). In my case it took around 3-4 mins. Since they haven't used `download_and_extract` here, that's why there's no download progress bar.", "Hi\r\nthanks, my internet speed should be good, but this really freezes for me. This is how I try to get the dataset:\r\n\r\n`from datasets import load_dataset\r\ndataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')`\r\n\r\nThe output I see is also different from what you see after writing this command:\r\n\r\n`Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /users/dara/cache/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...`\r\n\r\nDo you have any idea why it might freeze? Is there anything I am missing @lhoestq @bhavitvyamalik? Do I maybe need to set anything special for apache-beam? \r\n\r\nthanks a lot \r\n", "I tried this on another machine (followed the same procedure I've mentioned above). This is what it shows (during the freeze period) for me:\r\n```\r\n>>> dataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')\r\nDownloading: 5.26kB [00:00, 1.23MB/s] \r\nDownloading: 1.40kB [00:00, 327kB/s] \r\nDownloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/bhavitvya/.cache/huggingface/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...\r\nWARNING:apache_beam.internal.gcp.auth:Unable to find default credentials to use: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.\r\nConnecting anonymously.\r\nWARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.\r\n```\r\nAfter around 10 minutes, here's the loading of the dataset:\r\n```\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:16<00:00, 16.42s/sources]\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.12sources/s]\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.14sources/s]\r\nDataset wiki40b downloaded and prepared to /home/bhavitvya/.cache/huggingface/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f. Subsequent calls will reuse this data.\r\n```", "Hi\r\nI honestly also now tried on another machine and nothing shows up after\r\nhours of waiting. Are you sure you have not set any specific setting? Maybe\r\ngoogle cloud, which it seems is used here, needs some credential setting?\r\nthanks for any suggestions on this\r\n" ]
1,615,492,494,000
1,615,906,417,000
null
NONE
null
null
null
Hi, I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I get the error shown at the end. @lhoestq I would be grateful if you could assist me with it. I get this error for almost all languages except English. I really need the majority of the languages in this dataset to be able to train my models before a deadline, and your great, scalable, well-written library is my only hope to train the models at scale while being low on resources. Thank you very much. ``` (fast) dara@vgne046:/user/dara/dev/codes/seq2seq$ python test_data.py Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to temp/dara/cache_home_2/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f... Traceback (most recent call last): File "test_data.py", line 3, in <module> dataset = load_dataset("wiki40b", "cs") File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/load.py", line 746, in load_dataset use_auth_token=use_auth_token, File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/builder.py", line 579, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/builder.py", line 1105, in _download_and_prepare import apache_beam as beam File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/__init__.py", line 96, in <module> from apache_beam import io File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/io/__init__.py", line 23, in <module> from apache_beam.io.avroio import * File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/io/avroio.py", line 55, in <module> import avro File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 668, in _load_unlocked File "<frozen importlib._bootstrap>", line 638, in _load_backward_compatible File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/__init__.py", line 34, in <module> File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/__init__.py", line 30, in LoadResource NotADirectoryError: [Errno 20] Not a directory: '/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/VERSION.txt' ```
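For reference, a minimal sketch of the workaround from the comments above (install Apache Beam first, then pass an explicit runner); the config name is the one from this report:

```python
# Prerequisite, per the comments above:
#   pip install 'apache-beam[gcp]' --use-feature=2020-resolver
from datasets import load_dataset

# Passing an explicit Beam runner lets the preprocessing pipeline run locally.
dataset = load_dataset("wiki40b", "cs", beam_runner="DirectRunner")
print(dataset)
```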
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2035/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2035/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2034
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2034/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2034/comments
https://api.github.com/repos/huggingface/datasets/issues/2034/events
https://github.com/huggingface/datasets/pull/2034
829,381,388
MDExOlB1bGxSZXF1ZXN0NTkxMDU2MTEw
2,034
Fix typo
{ "login": "pcyin", "id": 3413464, "node_id": "MDQ6VXNlcjM0MTM0NjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3413464?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pcyin", "html_url": "https://github.com/pcyin", "followers_url": "https://api.github.com/users/pcyin/followers", "following_url": "https://api.github.com/users/pcyin/following{/other_user}", "gists_url": "https://api.github.com/users/pcyin/gists{/gist_id}", "starred_url": "https://api.github.com/users/pcyin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pcyin/subscriptions", "organizations_url": "https://api.github.com/users/pcyin/orgs", "repos_url": "https://api.github.com/users/pcyin/repos", "events_url": "https://api.github.com/users/pcyin/events{/privacy}", "received_events_url": "https://api.github.com/users/pcyin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,484,773,000
1,615,485,985,000
1,615,485,985,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2034", "html_url": "https://github.com/huggingface/datasets/pull/2034", "diff_url": "https://github.com/huggingface/datasets/pull/2034.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2034.patch", "merged_at": 1615485985000 }
Change `ENV_XDG_CACHE_HOME` to `XDG_CACHE_HOME`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2034/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2034/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2033
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2033/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2033/comments
https://api.github.com/repos/huggingface/datasets/issues/2033/events
https://github.com/huggingface/datasets/pull/2033
829,295,339
MDExOlB1bGxSZXF1ZXN0NTkwOTgzMDAy
2,033
Raise an error for outdated sacrebleu versions
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,478,880,000
1,615,485,492,000
1,615,485,492,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2033", "html_url": "https://github.com/huggingface/datasets/pull/2033", "diff_url": "https://github.com/huggingface/datasets/pull/2033.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2033.patch", "merged_at": 1615485492000 }
The `sacrebleu` metric seems to only work for sacrebleu>=1.4.12. For example, using sacrebleu==1.2.10, an error is raised (from metric/sacrebleu/sacrebleu.py): ```python def _compute( self, predictions, references, smooth_method="exp", smooth_value=None, force=False, lowercase=False, tokenize=scb.DEFAULT_TOKENIZER, use_effective_order=False, ): references_per_prediction = len(references[0]) if any(len(refs) != references_per_prediction for refs in references): raise ValueError("Sacrebleu requires the same number of references for each prediction") transformed_references = [[refs[i] for refs in references] for i in range(references_per_prediction)] > output = scb.corpus_bleu( sys_stream=predictions, ref_streams=transformed_references, smooth_method=smooth_method, smooth_value=smooth_value, force=force, lowercase=lowercase, tokenize=tokenize, use_effective_order=use_effective_order, ) E TypeError: corpus_bleu() got an unexpected keyword argument 'smooth_method' /mnt/cache/modules/datasets_modules/metrics/sacrebleu/b390045b3d1dd4abf6a95c4a2a11ee3bcc2b7620b076204d0ddc353fa649fd86/sacrebleu.py:114: TypeError ``` I improved the error message shown when users have an outdated version of sacrebleu. The new error message tells the user to update sacrebleu. cc @LysandreJik
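For illustration, a hedged sketch of the kind of version guard this PR describes; the exact message and its placement in the metric script may differ:

```python
# Hypothetical version guard, assuming sacrebleu exposes __version__ (it does
# in recent releases); the PR's actual check may be worded differently.
from packaging import version
import sacrebleu as scb

if version.parse(scb.__version__) < version.parse("1.4.12"):
    raise ImportWarning(
        "To use the `sacrebleu` metric, please update sacrebleu: "
        "pip install 'sacrebleu>=1.4.12'"
    )
```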
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2033/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2033/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2032
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2032/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2032/comments
https://api.github.com/repos/huggingface/datasets/issues/2032/events
https://github.com/huggingface/datasets/issues/2032
829,250,912
MDU6SXNzdWU4MjkyNTA5MTI=
2,032
Use Arrow filtering instead of writing a new arrow file for Dataset.filter
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false } ]
null
[]
1,615,475,930,000
1,615,483,257,000
null
MEMBER
null
null
null
Currently the filter method reads the dataset batch by batch to write a new, filtered, arrow file on disk. Therefore all the reading + writing can take some time. Using a mask directly on the arrow table doesn't do any read or write operation, so it's significantly quicker. I think there are two cases: - if the dataset doesn't have an indices mapping, then one can simply use the arrow filtering on the main arrow table `dataset._data.filter(...)` - if the dataset has an indices mapping, then the mask should be applied on the indices mapping table `dataset._indices.filter(...)` The indices mapping is used to map between the idx at `dataset[idx]` in `__getitem__` and the idx in the actual arrow table. The new filter method should therefore be faster, and allow users to pass either a filtering function (that returns a boolean given an example), or directly a mask. Feel free to discuss this idea in this thread :) One additional note: the refactor at #2025 would make all the pickle-related stuff work directly with the arrow filtering, so that we only need to change the Dataset.filter method without having to deal with pickle. cc @theo-m @gchhablani related issues: #1796 #1949
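To make the proposal concrete, a small pyarrow sketch of mask-based filtering; the column names are toy examples:

```python
# Table.filter applies a boolean mask in Arrow, without the batch-by-batch
# read/write pass that the current Dataset.filter performs.
import pyarrow as pa
import pyarrow.compute as pc

table = pa.table({"text": ["a", "b", "c"], "label": [0, 1, 0]})
mask = pc.equal(table["label"], 0)  # boolean mask over the column
filtered = table.filter(mask)       # no new arrow file is written

print(filtered.to_pydict())  # {'text': ['a', 'c'], 'label': [0, 0]}
```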
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2032/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2032/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2031
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2031/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2031/comments
https://api.github.com/repos/huggingface/datasets/issues/2031/events
https://github.com/huggingface/datasets/issues/2031
829,122,778
MDU6SXNzdWU4MjkxMjI3Nzg=
2,031
wikipedia.py generator that extracts XML doesn't release memory
{ "login": "miyamonz", "id": 6331508, "node_id": "MDQ6VXNlcjYzMzE1MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/6331508?v=4", "gravatar_id": "", "url": "https://api.github.com/users/miyamonz", "html_url": "https://github.com/miyamonz", "followers_url": "https://api.github.com/users/miyamonz/followers", "following_url": "https://api.github.com/users/miyamonz/following{/other_user}", "gists_url": "https://api.github.com/users/miyamonz/gists{/gist_id}", "starred_url": "https://api.github.com/users/miyamonz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/miyamonz/subscriptions", "organizations_url": "https://api.github.com/users/miyamonz/orgs", "repos_url": "https://api.github.com/users/miyamonz/repos", "events_url": "https://api.github.com/users/miyamonz/events{/privacy}", "received_events_url": "https://api.github.com/users/miyamonz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @miyamonz \r\nThanks for investigating this issue, good job !\r\nIt would be awesome to integrate your fix in the library, could you open a pull request ?", "OK! I'll send it later." ]
1,615,467,084,000
1,616,402,032,000
1,616,402,032,000
CONTRIBUTOR
null
null
null
I tried downloading the Japanese wikipedia, but it always failed, probably because it ran out of memory. I found that the generator function that extracts the XML data in wikipedia.py doesn't release memory inside the loop. https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L464-L502 `root.clear()` is intended to free memory, but it doesn't. https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L490 https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L494 I replaced those calls with `elem.clear()`, and then it seems to work correctly. Here is a notebook to reproduce it: https://gist.github.com/miyamonz/dc06117302b6e85fa51cbf46dde6bb51#file-xtract_content-ipynb
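For reference, a minimal sketch of the fix using the standard-library ElementTree (the actual script's parsing code differs); `page` stands in for the element tag processed in the Wikipedia dumps:

```python
# Streaming parse: "end" events fire once an element is fully parsed. Clearing
# the element itself (elem.clear) frees its subtree; clearing only the root,
# as the old code did, kept already-processed elements alive and memory grew.
import xml.etree.ElementTree as ET

def iter_pages(xml_path):
    for _, elem in ET.iterparse(xml_path, events=("end",)):
        if elem.tag.endswith("page"):
            yield elem
            elem.clear()  # release this subtree after the consumer has used it
```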
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2031/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2031/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2030
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2030/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2030/comments
https://api.github.com/repos/huggingface/datasets/issues/2030/events
https://github.com/huggingface/datasets/pull/2030
829,110,803
MDExOlB1bGxSZXF1ZXN0NTkwODI4NzQ4
2,030
Implement Dataset from text
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I am wondering why only one test of \"keep_in_memory=True\" fails, when there are many other tests that test the same thing, and it happens only in pyarrow_1..." ]
1,615,466,090,000
1,616,074,169,000
1,616,074,169,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2030", "html_url": "https://github.com/huggingface/datasets/pull/2030", "diff_url": "https://github.com/huggingface/datasets/pull/2030.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2030.patch", "merged_at": 1616074169000 }
Implement `Dataset.from_text`. Analogue to #1943, #1946.
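For illustration, a hypothetical usage sketch by analogy with the `from_csv`/`from_json` constructors referenced above; the final signature is whatever this PR defines:

```python
# Hypothetical usage: one example per line of the text file, in a "text"
# column, by analogy with Dataset.from_csv (#1943) and Dataset.from_json (#1946).
from datasets import Dataset

ds = Dataset.from_text("corpus.txt")
print(ds[0])  # e.g. {'text': '...first line of corpus.txt...'}
```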
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2030/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2030/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2029
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2029/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2029/comments
https://api.github.com/repos/huggingface/datasets/issues/2029/events
https://github.com/huggingface/datasets/issues/2029
829,097,290
MDU6SXNzdWU4MjkwOTcyOTA=
2,029
Loading a faiss index KeyError
{ "login": "nbroad1881", "id": 24982805, "node_id": "MDQ6VXNlcjI0OTgyODA1", "avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nbroad1881", "html_url": "https://github.com/nbroad1881", "followers_url": "https://api.github.com/users/nbroad1881/followers", "following_url": "https://api.github.com/users/nbroad1881/following{/other_user}", "gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}", "starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions", "organizations_url": "https://api.github.com/users/nbroad1881/orgs", "repos_url": "https://api.github.com/users/nbroad1881/repos", "events_url": "https://api.github.com/users/nbroad1881/events{/privacy}", "received_events_url": "https://api.github.com/users/nbroad1881/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[ "In your code `dataset2` doesn't contain the \"embeddings\" column, since it is created from the pandas DataFrame with columns \"text\" and \"label\".\r\n\r\nTherefore when you call `dataset2[embeddings_name]`, you get a `KeyError`.\r\n\r\nIf you want the \"embeddings\" column back, you can create `dataset2` with\r\n```python\r\ndataset2 = load_from_disk(dataset_filename)\r\n```\r\nwhere `dataset_filename` is the place where you saved you dataset with the embeddings in the first place.", "Ok in that case HF should fix their misleading example at https://huggingface.co/docs/datasets/faiss_and_ea.html#adding-a-faiss-index \r\n\r\nI copy-pasted it here.\r\n\r\n> When you are done with your queries you can save your index on disk:\r\n> \r\n> ```python\r\n> ds_with_embeddings.save_faiss_index('embeddings', 'my_index.faiss')\r\n> ```\r\n> Then reload it later:\r\n> \r\n> ```python\r\n> ds = load_dataset('crime_and_punish', split='train[:100]')\r\n> ds.load_faiss_index('embeddings', 'my_index.faiss')\r\n> ```", "Hi !\r\n\r\nThe code of the example is valid.\r\nAn index is a search engine, it's not considered a column of a dataset.\r\nWhen you do `ds.load_faiss_index(\"embeddings\", 'my_index.faiss')`, it attaches an index named \"embeddings\" to the dataset but it doesn't re-add the \"embeddings\" column. You can list the indexes of a dataset by using `ds.list_indexes()`.\r\n\r\nIf I understand correctly by reading this example you thought that it was re-adding the \"embeddings\" column.\r\nThis looks misleading indeed, and we should add a note to make it more explicit that it doesn't store the column that was used to build the index.\r\n\r\nFeel free to open a PR to suggest an improvement on the documentation if you want to contribute :)", "> If I understand correctly by reading this example you thought that it was re-adding the \"embeddings\" column.\r\nYes. I was trying to use the dataset in RAG and it complained that the dataset didn't have the right columns. No problems when loading the dataset with `load_from_disk` and then doing `load_faiss_index`\r\n\r\nWhat I learned was\r\n1. column and index are different\r\n2. loading the index does not create a column\r\n3. the column is not needed to be able to use the index\r\n4. RAG needs both the embeddings column and the index\r\n\r\nIf I can come up with a way to articulate this in the right spot in the docs, I'll open a PR" ]
1,615,464,973,000
1,615,508,469,000
1,615,508,469,000
NONE
null
null
null
I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation. The basic steps are: 1. Create a dataset (dataset1) 2. Create an embeddings column using DPR 3. Add a faiss index to the dataset 4. Save faiss index to a file 5. Create a new dataset (dataset2) with the same text and label information as dataset1 6. Try to load the faiss index from file to dataset2 7. Get `KeyError: "Column embeddings not in the dataset"` I've made a colab notebook that should show exactly what I did. Please switch to GPU runtime; I didn't check on CPU. https://colab.research.google.com/drive/1X0S9ZuZ8k0ybcoei4w7so6dS_WrABmIx?usp=sharing Ubuntu Version VERSION="18.04.5 LTS (Bionic Beaver)" datasets==1.4.1 faiss==1.5.3 faiss-gpu==1.7.0 torch==1.8.0+cu101 transformers==4.3.3 NVIDIA-SMI 460.56 Driver Version: 460.32.03 CUDA Version: 11.2 Tesla K80 I was basically following the steps here: https://huggingface.co/docs/datasets/faiss_and_ea.html#adding-a-faiss-index I included the exact code from the documentation at the end of the notebook to show that they don't work either.
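For reference, a sketch of the resolution from the comments above: reload the saved dataset (which still contains the "embeddings" column) and then re-attach the index, since the index is a search structure rather than a column. The query vector here is a placeholder:

```python
# load_from_disk restores the "embeddings" column; load_faiss_index re-attaches
# the index. RAG needs both: the column holds the vectors, the index searches.
import numpy as np
from datasets import load_from_disk

ds = load_from_disk("path/to/ds_with_embeddings")
ds.load_faiss_index("embeddings", "my_index.faiss")
print(ds.list_indexes())  # ['embeddings']

question_embedding = np.random.rand(768).astype("float32")  # placeholder query
scores, examples = ds.get_nearest_examples("embeddings", question_embedding, k=5)
```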
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2029/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2029/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2028
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2028/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2028/comments
https://api.github.com/repos/huggingface/datasets/issues/2028/events
https://github.com/huggingface/datasets/pull/2028
828,721,393
MDExOlB1bGxSZXF1ZXN0NTkwNDk1NzEx
2,028
Adding PersiNLU reading-comprehension
{ "login": "danyaljj", "id": 2441454, "node_id": "MDQ6VXNlcjI0NDE0NTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danyaljj", "html_url": "https://github.com/danyaljj", "followers_url": "https://api.github.com/users/danyaljj/followers", "following_url": "https://api.github.com/users/danyaljj/following{/other_user}", "gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}", "starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions", "organizations_url": "https://api.github.com/users/danyaljj/orgs", "repos_url": "https://api.github.com/users/danyaljj/repos", "events_url": "https://api.github.com/users/danyaljj/events{/privacy}", "received_events_url": "https://api.github.com/users/danyaljj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq I think I have addressed all your comments. ", "Thanks! @lhoestq Let me know if you want me to address anything to get this merged. ", "It's all good thanks ;)\r\nmerging" ]
1,615,437,673,000
1,615,801,197,000
1,615,801,197,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2028", "html_url": "https://github.com/huggingface/datasets/pull/2028", "diff_url": "https://github.com/huggingface/datasets/pull/2028.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2028.patch", "merged_at": 1615801197000 }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2028/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2028/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2027
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2027/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2027/comments
https://api.github.com/repos/huggingface/datasets/issues/2027/events
https://github.com/huggingface/datasets/pull/2027
828,490,444
MDExOlB1bGxSZXF1ZXN0NTkwMjkzNDA1
2,027
Update format columns in Dataset.rename_columns
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,420,259,000
1,615,473,520,000
1,615,473,520,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2027", "html_url": "https://github.com/huggingface/datasets/pull/2027", "diff_url": "https://github.com/huggingface/datasets/pull/2027.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2027.patch", "merged_at": 1615473520000 }
Fixes #2026
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2027/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2027/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2026
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2026/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2026/comments
https://api.github.com/repos/huggingface/datasets/issues/2026/events
https://github.com/huggingface/datasets/issues/2026
828,194,467
MDU6SXNzdWU4MjgxOTQ0Njc=
2,026
KeyError on using map after renaming a column
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nActually, the error occurs due to these two lines:\r\n```python\r\nraw_dataset.set_format('torch',columns=['img','label'])\r\nraw_dataset = raw_dataset.rename_column('img','image')\r\n```\r\n`Dataset.rename_column` doesn't update the `_format_columns` attribute, previously defined by `Dataset.set_format`, with a new column name which is why this new column is missing in the output.", "Hi @mariosasko,\n\nThanks for opening a PR on this :)\nWhy does the old name also disappear?", "I just merged a @mariosasko 's PR that fixes this issue.\r\nIf it happens again, feel free to re-open :)" ]
1,615,402,457,000
1,615,473,574,000
1,615,473,520,000
CONTRIBUTOR
null
null
null
Hi, I'm trying to use the `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying the `prepare_train_features` function. Here is what I tried: ```python transform = Compose([ToPILImage(),ToTensor(),Normalize([0.0,0.0,0.0],[1.0,1.0,1.0])]) def prepare_features(examples): images = [] labels = [] print(examples) for example_idx, example in enumerate(examples["image"]): if transform is not None: images.append(transform(examples["image"][example_idx].permute(2,0,1))) else: images.append(examples["image"][example_idx].permute(2,0,1)) labels.append(examples["label"][example_idx]) output = {"label":labels, "image":images} return output raw_dataset = load_dataset('cifar10') raw_dataset.set_format('torch',columns=['img','label']) raw_dataset = raw_dataset.rename_column('img','image') features = datasets.Features({ "image": datasets.Array3D(shape=(3,32,32),dtype="float32"), "label": datasets.features.ClassLabel(names=[ "airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck", ]), }) train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000) ``` The error: ```python --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-54-bf29672c53ee> in <module>() 14 ]), 15 }) ---> 16 train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000) 2 frames /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint) 1287 test_inputs = self[:2] if batched else self[0] 1288 test_indices = [0, 1] if batched else 0 -> 1289 update_data = does_function_return_dict(test_inputs, test_indices) 1290 logger.info("Testing finished, running the mapping function on the dataset") 1291 /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in does_function_return_dict(inputs, indices) 1258 fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns] 1259 processed_inputs = ( -> 1260 function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) 1261 ) 1262 does_return_dict = isinstance(processed_inputs, Mapping) <ipython-input-52-b4dccbafb70d> in prepare_features(examples) 3 labels = [] 4 print(examples) ----> 5 for example_idx, example in enumerate(examples["image"]): 6 if transform is not None: 7 images.append(transform(examples["image"][example_idx].permute(2,0,1))) KeyError: 'image' ``` The print statement inside prints this: ```python {'label': tensor([6, 9])} ``` Apparently, neither `img` nor `image` exists after renaming. Note that this code works fine with `img` everywhere. Notebook: https://colab.research.google.com/drive/1SzESAlz3BnVYrgQeJ838vbMp1OsukiA2?usp=sharing
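For reference, a minimal sketch of a workaround implied by the explanation in the comments above: register the torch format after renaming, so the new column name is among the format columns that `__getitem__` returns:

```python
# rename_column does not update the columns previously registered by
# set_format, so calling set_format afterwards (with the new name) avoids
# the KeyError described in this issue.
from datasets import load_dataset

raw_dataset = load_dataset("cifar10")
raw_dataset = raw_dataset.rename_column("img", "image")
raw_dataset.set_format("torch", columns=["image", "label"])
```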
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2026/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2026/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2025
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2025/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2025/comments
https://api.github.com/repos/huggingface/datasets/issues/2025/events
https://github.com/huggingface/datasets/pull/2025
828,047,476
MDExOlB1bGxSZXF1ZXN0NTg5ODk2NjMz
2,025
[Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "There is one more thing I would love to see. Let's say we iteratively keep updating a data source that loaded from **load_dataset** or **load_from_disk**. Now we need to save it to the same location by overriding the previous file inorder to save the disk space. At the moment **save_to_disk** can not assign a name. So I do not see an easy way to override the previous files. @lhoestq is this possible?\r\n\r\n\r\n\r\np.s one last thing?\r\n\r\nIs there a way to flush out any connection to a data source loaded from **load_from_disk** or **load_dataset** methods? At the moment I suspect when we use any of those functions, it will always keep a pointer although we override it again with a new version of the dataset source. This is really useful in an iterative process. \r\n\r\n", "> There is one more thing I would love to see. Let's say we iteratively keep updating a data source that loaded from **load_dataset** or **load_from_disk**. Now we need to save it to the same location by overriding the previous file inorder to save the disk space. At the moment **save_to_disk** can not assign a name. So I do not see an easy way to override the previous files. @lhoestq is this possible?\r\n\r\nIn the new save_to_disk, the filename of the arrow file is fixed: `dataset.arrow`.\r\nThis way is will be overwritten if you save your dataset again\r\n\r\n> Is there a way to flush out any connection to a data source loaded from **load_from_disk** or **load_dataset** methods? At the moment I suspect when we use any of those functions, it will always keep a pointer although we override it again with a new version of the dataset source. This is really useful in an iterative process.\r\n\r\nIf you update an arrow file, then you must reload it with `load_from_disk` for example in order to have the updated data.\r\nDoes that answer the question ? How does this \"pointer\" behavior manifest exactly on your side ?", "Apparently the usage of the compute layer of pyarrow requires pyarrow>=1.0.0 (otherwise there are some issues on windows with file permissions when doing dataset concatenation).\r\n\r\nI'll bump the pyarrow requirement from, 0.17.1 to 1.0.0", "\r\n> If you update an arrow file, then you must reload it with `load_from_disk` for example in order to have the updated data.\r\n> Does that answer the question? How does this \"pointer\" behavior manifest exactly on your side?\r\n\r\nYes, I checked this behavior.. if we update the .arrow file it kind of flushes out the previous one. So your solution is perfect <3. ", "Sorry for spamming, there's a a bug that only happens on the CI so I have to re-run it several times", "Alright I finally added all the tests I wanted !\r\nI also fixed all the bugs and now all the tests are passing :)\r\n\r\nLet me know if you have comments.\r\n\r\nI also noticed that two methods in pyarrow seem to bring some data in memory even for a memory mapped table: filter and cast:\r\n- for filter I took a look at the C++ code on the arrow's side and found [this part](https://github.com/apache/arrow/blob/55c8d74d5556b25238fb2028e9fb97290ea24684/cpp/src/arrow/compute/kernels/vector_selection.cc#L93-L160) that \"builds\" the array during filter. It seems to indicate that it allocates new memory for the filtered array but not 100% sure.\r\n- regarding cast I noticed that it happens when changing the precision of an array of integers. 
Not sure if there are other cases.\r\n\r\n\r\nMaybe we'll need to investigate this a bit for your PR on improving `filter` @theo-m , since we don't want to fill the users memory.", "> Maybe we'll need to investigate this a bit for your PR on improving `filter` @theo-m , since we don't want to fill the users memory.\r\n\r\nI'm a bit unclear on this, I thought the point of the refactor was to use `Table.filter` to speed up our own `.filter` and stop using `.map` that offloaded too much stuff on disk. \r\nAt some point I recall we decided to use `keep_in_memory=True` as the expectations were that it would be hard to fill the memory?", "> I'm a bit unclear on this, I thought the point of the refactor was to use Table.filter to speed up our own .filter and stop using .map that offloaded too much stuff on disk.\r\n> At some point I recall we decided to use keep_in_memory=True as the expectations were that it would be hard to fill the memory?\r\n\r\nYes it's ok to have the mask in memory, but not the full table. I was not aware that the table returned by filter could actually be in memory (it's not part of the pyarrow documentation afaik).\r\nTo be more specific I noticed that every time you call `filter`, the pyarrow total allocated memory increases.\r\nI haven't checked on a big dataset though, but it would be nice to see how much memory it uses with respect to the size of the dataset.", "I have addressed your comments @theo-m @albertvillanova ! Thanks for the suggestions", "I totally agree with you. I would have loved to use inheritance instead.\r\nHowever because `pa.Table` is a cython class without proper initialization methods (you can't call `__init__` for example): you can't instantiate a subclass of `pa.Table` in python.\r\nTo be more specific, you actually can try to instantiate a subclass of `pa.Table` with no data BUT this is not a valid table so you get an error.\r\nAnd since `pa.Table` objects are immutable you can't even set the data in `__new__` or `__init__`.\r\n\r\nEDIT: one could make a new cython class that inherits from `pa.Table` with proper initialization methods, so that we can inherit from this class instead in python. We can do that in the future if we plan to use cython in `datasets`.\r\n(see: https://arrow.apache.org/docs/python/extending.html)", "@lhoestq, but in which cases you would like to instantiate directly either `InMemoryTable` or `MemoryMappedTable`? You normally use one of their `from_xxx` class methods...", "Yes I was thinking of these cases. The issue is that they return `pa.Table` objects even from a subclass of `pa.Table`", "That is indeed a weird behavior...", "I guess that in this case, the best approach is as you did, using composition over inheritance...\r\n\r\nhttps://github.com/apache/arrow/pull/5322", "@lhoestq I think you forgot to add the new classes to the docs?", "Yes you're right, let me add them" ]
1,615,395,647,000
1,617,115,613,000
1,616,777,519,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2025", "html_url": "https://github.com/huggingface/datasets/pull/2025", "diff_url": "https://github.com/huggingface/datasets/pull/2025.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2025.patch", "merged_at": 1616777518000 }
## Intro Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files). This assumption is used for pickling, for example: - an in-memory dataset can just be pickled/unpickled in-memory - an on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling ## Issues Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk. Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk. ## Solution provided in this PR I changed this by allowing several types of Table to be used in the Dataset object. More specifically, I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable. The in-memory and memory-mapped tables implement the pickling behavior described above. The ConcatenationTable can be made from several tables (either in-memory or memory-mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks. ## Implementation details The three table classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table. Regarding the MemoryMappedTable: reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast, etc.). Therefore, the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk. ## Checklist - [x] add InMemoryTable - [x] add MemoryMappedTable - [x] add ConcatenationTable - [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter - [x] Update Dataset.from_xxx methods - [x] Update load_from_disk and save_to_disk - [x] Backward compatibility of load_from_disk - [x] Add tests for the new tables - [x] Update current tests - [ ] Documentation ---------- I would be happy to discuss the design of this PR :) Close #1877
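As a rough illustration of the composition-over-inheritance point made in the comments above, here is a hypothetical sketch (names are illustrative, not the actual `table.py` code): since `pa.Table` is a Cython class that cannot be subclassed and instantiated from Python, the wrapper holds a pyarrow table and delegates attribute access to it.

```python
import pyarrow as pa


class Table:
    """Hypothetical sketch: wrap a pyarrow table instead of subclassing it."""

    def __init__(self, table: pa.Table):
        self.table = table

    def __getattr__(self, name):
        # Delegate attributes/methods (num_rows, slice, cast, ...) to the
        # wrapped pyarrow table.
        return getattr(self.table, name)


class InMemoryTable(Table):
    def __reduce__(self):
        # Pickling an in-memory table serializes the table data itself;
        # a memory-mapped variant would instead keep only the file path
        # and a log of operations to replay on reload.
        return InMemoryTable, (self.table,)
```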
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2025/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2025/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2024
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2024/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2024/comments
https://api.github.com/repos/huggingface/datasets/issues/2024/events
https://github.com/huggingface/datasets/pull/2024
827,842,962
MDExOlB1bGxSZXF1ZXN0NTg5NzEzNDAy
2,024
Remove print statement from mnist.py
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for noticing !\r\n#2020 fixed this earlier today though ^^'\r\n\r\nClosing this one" ]
1,615,387,198,000
1,615,485,832,000
1,615,485,831,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2024", "html_url": "https://github.com/huggingface/datasets/pull/2024", "diff_url": "https://github.com/huggingface/datasets/pull/2024.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2024.patch", "merged_at": null }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2024/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2024/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2023
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2023/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2023/comments
https://api.github.com/repos/huggingface/datasets/issues/2023/events
https://github.com/huggingface/datasets/pull/2023
827,819,608
MDExOlB1bGxSZXF1ZXN0NTg5NjkyNDU2
2,023
Add Romanian to XQuAD
{ "login": "M-Salti", "id": 9285264, "node_id": "MDQ6VXNlcjkyODUyNjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/M-Salti", "html_url": "https://github.com/M-Salti", "followers_url": "https://api.github.com/users/M-Salti/followers", "following_url": "https://api.github.com/users/M-Salti/following{/other_user}", "gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}", "starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions", "organizations_url": "https://api.github.com/users/M-Salti/orgs", "repos_url": "https://api.github.com/users/M-Salti/repos", "events_url": "https://api.github.com/users/M-Salti/events{/privacy}", "received_events_url": "https://api.github.com/users/M-Salti/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for updating XQUAD :)\r\n\r\nThe slow test is failing though since there's no dummy data nor metadata in dataset_infos.json for the romanian configuration.\r\n\r\nCould you please generate the dummy data with\r\n```\r\ndatasets-cli dummy_data ./datasets/xquad --auto_generate --json_field data\r\n```\r\nThis will update all the dummy data files, and also add the new one for the romanian configuration.\r\n\r\n\r\nYou can also update the metadata with\r\n```\r\ndatasets-cli test ./datasets/xquad --name xquad.ro --save_infos\r\n```\r\nThis will update the dataset_infos.json file with the metadata of the romanian config :)\r\n\r\nThanks in advance !", "Hello Quentin, and thanks for your help.\r\n\r\nI found that running\r\n\r\n```python\r\ndatasets-cli test ./datasets/xquad --name xquad.ro --save_infos\r\n```\r\n\r\nwas not enough to pass the slow tests, because it was not adding the new `xquad.ro.json` checksum to the other configs infos and becuase of that an `UnexpectedDownloadedFile` error was being thrown, so instead I used:\r\n\r\n```python\r\ndatasets-cli test ./datasets/xquad --save_infos --all_configs --ignore_verifications\r\n```\r\n\r\n`--ignore_verifications` was necessary to bypass the same `UnexpectedDownloadedFile` error.\r\n\r\nAdditionally, I deleted `dummy_data_copy.zip` and the `copy.sh` script because they both seem now unnecessary.\r\n\r\nThe slow tests for both the real and dummy data now pass successfully, so I hope that I didn't mess anything up :)\r\n", "You're right, you needed the `--ignore_verifications` flag !\r\nThanks for updating them :)\r\n\r\nAlthough I just noticed that the new dummy_data.zip files are quite big (170KB each) because they contain the json files of all the languages, while only one json file per language is necessary. Could you remove the unnecessary json files to reduce the size of the dummy_data.zip files if you don't mind ?", "Done. I created a script (`remove_unnecessary_langs.sh`) to automate the process.\r\n" ]
1,615,386,272,000
1,615,802,897,000
1,615,802,897,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2023", "html_url": "https://github.com/huggingface/datasets/pull/2023", "diff_url": "https://github.com/huggingface/datasets/pull/2023.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2023.patch", "merged_at": 1615802897000 }
On Jan 18, XQuAD was updated with a new Romanian validation file ([xquad commit link](https://github.com/deepmind/xquad/commit/60cac411649156efb6aab9dd4c9cde787a2c0345))
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2023/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2023/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2022
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2022/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2022/comments
https://api.github.com/repos/huggingface/datasets/issues/2022/events
https://github.com/huggingface/datasets/issues/2022
827,435,033
MDU6SXNzdWU4Mjc0MzUwMzM=
2,022
ValueError when rename_column on split dataset
{ "login": "simonschoe", "id": 53626067, "node_id": "MDQ6VXNlcjUzNjI2MDY3", "avatar_url": "https://avatars.githubusercontent.com/u/53626067?v=4", "gravatar_id": "", "url": "https://api.github.com/users/simonschoe", "html_url": "https://github.com/simonschoe", "followers_url": "https://api.github.com/users/simonschoe/followers", "following_url": "https://api.github.com/users/simonschoe/following{/other_user}", "gists_url": "https://api.github.com/users/simonschoe/gists{/gist_id}", "starred_url": "https://api.github.com/users/simonschoe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/simonschoe/subscriptions", "organizations_url": "https://api.github.com/users/simonschoe/orgs", "repos_url": "https://api.github.com/users/simonschoe/repos", "events_url": "https://api.github.com/users/simonschoe/events{/privacy}", "received_events_url": "https://api.github.com/users/simonschoe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nThis is a bug so thanks for reporting it. `Dataset.__setstate__` is the problem, which is called when `Dataset.rename_column` tries to copy the dataset with `copy.deepcopy(self)`. This only happens if the `split` arg in `load_dataset` was defined as `ReadInstruction`.\r\n\r\nTo overcome this issue, use the named splits API (for now):\r\n```python\r\ntrain_ds, test_ds = load_dataset(\r\n path='csv', \r\n delimiter='\\t', \r\n data_files=text_files, \r\n split=['train[:90%]', 'train[-10%:]'],\r\n)\r\n\r\ntrain_ds = train_ds.rename_column('sentence', 'text')\r\n```", "This has been fixed in #2043 , thanks @mariosasko \r\nThe fix is available on master and we'll do a new release soon :)\r\n\r\nfeel free to re-open if you still have issues" ]
1,615,369,238,000
1,615,903,568,000
1,615,903,505,000
NONE
null
null
null
Hi there, I am loading a `.tsv` file via `load_dataset` and subsequently split the rows into training and test sets via the `ReadInstruction` API like so: ```python split = { 'train': ReadInstruction('train', to=90, unit='%'), 'test': ReadInstruction('train', from_=-10, unit='%') } dataset = load_dataset( path='csv', # use 'text' loading script to load from local txt-files delimiter='\t', # xxx data_files=text_files, # list of paths to local text files split=split, # xxx ) dataset ``` Part of output: ```python DatasetDict({ train: Dataset({ features: ['sentence', 'sentiment'], num_rows: 900 }) test: Dataset({ features: ['sentence', 'sentiment'], num_rows: 100 }) }) ``` Afterwards I'd like to rename the 'sentence' column to 'text' in order to be compatible with my modeling pipeline. If I run the following code, however, I experience a `ValueError`: ```python dataset['train'].rename_column('sentence', 'text') ``` ```python /usr/local/lib/python3.7/dist-packages/datasets/splits.py in __init__(self, name) 353 for split_name in split_names_from_instruction: 354 if not re.match(_split_re, split_name): --> 355 raise ValueError(f"Split name should match '{_split_re}'' but got '{split_name}'.") 356 357 def __str__(self): ValueError: Split name should match '^\w+(\.\w+)*$'' but got 'ReadInstruction('. ``` In particular, this behavior does not arise if I use the deprecated `rename_column_` method. Any idea what causes the error? I would assume it is something in the way I defined the split. Thanks in advance! :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2022/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2022/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2021
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2021/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2021/comments
https://api.github.com/repos/huggingface/datasets/issues/2021/events
https://github.com/huggingface/datasets/issues/2021
826,988,016
MDU6SXNzdWU4MjY5ODgwMTY=
2,021
Interactively doing save_to_disk and load_from_disk corrupts the datasets object?
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nCan you give us a minimal reproducible example? This [part](https://huggingface.co/docs/datasets/master/processing.html#controling-the-cache-behavior) of the docs explains how to control caching." ]
1,615,344,514,000
1,615,630,061,000
1,615,630,061,000
NONE
null
null
null
The dataset_info.json file saved after using save_to_disk gets corrupted as follows. ![image](https://user-images.githubusercontent.com/16892570/110568474-ed969880-81b7-11eb-832f-2e5129656016.png) Is there a way to disable the cache that saves to /tmp/huggingface/datasets? I have a feeling there is a serious issue with caching.
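A short sketch of the cache control the reply above points to, assuming the `set_caching_enabled` helper described in the linked docs section:

```python
import datasets

# Disable the transform cache globally so map/filter results are not
# reused from (or written under) the on-disk cache directory.
datasets.set_caching_enabled(False)
```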
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2021/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2021/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2020
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2020/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2020/comments
https://api.github.com/repos/huggingface/datasets/issues/2020/events
https://github.com/huggingface/datasets/pull/2020
826,961,126
MDExOlB1bGxSZXF1ZXN0NTg4OTE3MjYx
2,020
Remove unnecessary docstart check in conll-like datasets
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,342,816,000
1,615,469,617,000
1,615,469,617,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2020", "html_url": "https://github.com/huggingface/datasets/pull/2020", "diff_url": "https://github.com/huggingface/datasets/pull/2020.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2020.patch", "merged_at": 1615469617000 }
Related to this PR: #1998. Additionally, this PR adds the docstart note to the conll2002 dataset card ([link](https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/ned.train) to the raw data with `DOCSTART` lines).
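For context, a small hypothetical sketch of the docstart handling this PR touches: conll-style loaders skip `-DOCSTART-` marker lines, which delimit documents rather than tokens.

```python
def read_conll_sentences(path):
    """Yield sentences as lists of token lines, skipping -DOCSTART- markers."""
    sentence = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line.startswith("-DOCSTART-"):
                continue  # document boundary marker, not a token
            if not line:
                # Blank line ends the current sentence.
                if sentence:
                    yield sentence
                    sentence = []
            else:
                sentence.append(line)
    if sentence:
        yield sentence
```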
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2020/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2020/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2019
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2019/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2019/comments
https://api.github.com/repos/huggingface/datasets/issues/2019/events
https://github.com/huggingface/datasets/pull/2019
826,625,706
MDExOlB1bGxSZXF1ZXN0NTg4NjEyODgy
2,019
Replace print with logging in dataset scripts
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq Maybe a script or even a test in `test_dataset_common.py` that verifies that a dataset script meets some set of quality standards (print calls and todos from the dataset script template are not present, etc.) could be added?", "Yes definitely !" ]
1,615,323,574,000
1,615,543,741,000
1,615,479,259,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2019", "html_url": "https://github.com/huggingface/datasets/pull/2019", "diff_url": "https://github.com/huggingface/datasets/pull/2019.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2019.patch", "merged_at": 1615479258000 }
Replaces `print(...)` in the dataset scripts with the library logger.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2019/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2019/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2018
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2018/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2018/comments
https://api.github.com/repos/huggingface/datasets/issues/2018/events
https://github.com/huggingface/datasets/pull/2018
826,473,764
MDExOlB1bGxSZXF1ZXN0NTg4NDc0NTQz
2,018
Md gender card update
{ "login": "mcmillanmajora", "id": 26722925, "node_id": "MDQ6VXNlcjI2NzIyOTI1", "avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mcmillanmajora", "html_url": "https://github.com/mcmillanmajora", "followers_url": "https://api.github.com/users/mcmillanmajora/followers", "following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}", "gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}", "starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions", "organizations_url": "https://api.github.com/users/mcmillanmajora/orgs", "repos_url": "https://api.github.com/users/mcmillanmajora/repos", "events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}", "received_events_url": "https://api.github.com/users/mcmillanmajora/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Link to the card: https://github.com/mcmillanmajora/datasets/blob/md-gender-card/datasets/md_gender_bias/README.md", "dataset card* @sgugger :p ", "Ahah that's what I wanted to say @lhoestq, thanks for fixing. Not used to review the Datasets side ;-)" ]
1,615,316,240,000
1,615,570,260,000
1,615,570,260,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2018", "html_url": "https://github.com/huggingface/datasets/pull/2018", "diff_url": "https://github.com/huggingface/datasets/pull/2018.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2018.patch", "merged_at": 1615570260000 }
I updated the descriptions of the datasets as they appear in the HF repo and the descriptions of the source datasets according to what I could find from the paper and the references. I'm still a little unclear about some of the fields of the different configs, and there was little info on the word list and name list. I'll contact the authors to see if they have any additional information or suggested changes.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2018/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2018/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2017
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2017/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2017/comments
https://api.github.com/repos/huggingface/datasets/issues/2017/events
https://github.com/huggingface/datasets/pull/2017
826,428,578
MDExOlB1bGxSZXF1ZXN0NTg4NDMyNDc2
2,017
Add TF-based Features to handle different modes of data
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,314,592,000
1,615,984,328,000
1,615,984,327,000
CONTRIBUTOR
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2017", "html_url": "https://github.com/huggingface/datasets/pull/2017", "diff_url": "https://github.com/huggingface/datasets/pull/2017.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2017.patch", "merged_at": null }
Hi, I am creating this draft PR to work on adding features similar to [TF datasets](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/core/features). I'll be starting with the `Tensor` and `FeatureConnector` classes, and building upon them to add other features as well. This is a work in progress.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2017/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2017/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2016
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2016/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2016/comments
https://api.github.com/repos/huggingface/datasets/issues/2016/events
https://github.com/huggingface/datasets/pull/2016
825,965,493
MDExOlB1bGxSZXF1ZXN0NTg4MDA5NjEz
2,016
Not all languages have 2-digit codes.
{ "login": "asiddhant", "id": 13891775, "node_id": "MDQ6VXNlcjEzODkxNzc1", "avatar_url": "https://avatars.githubusercontent.com/u/13891775?v=4", "gravatar_id": "", "url": "https://api.github.com/users/asiddhant", "html_url": "https://github.com/asiddhant", "followers_url": "https://api.github.com/users/asiddhant/followers", "following_url": "https://api.github.com/users/asiddhant/following{/other_user}", "gists_url": "https://api.github.com/users/asiddhant/gists{/gist_id}", "starred_url": "https://api.github.com/users/asiddhant/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/asiddhant/subscriptions", "organizations_url": "https://api.github.com/users/asiddhant/orgs", "repos_url": "https://api.github.com/users/asiddhant/repos", "events_url": "https://api.github.com/users/asiddhant/events{/privacy}", "received_events_url": "https://api.github.com/users/asiddhant/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,298,019,000
1,615,485,663,000
1,615,485,663,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2016", "html_url": "https://github.com/huggingface/datasets/pull/2016", "diff_url": "https://github.com/huggingface/datasets/pull/2016.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2016.patch", "merged_at": 1615485663000 }
.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2016/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2016/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2015
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2015/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2015/comments
https://api.github.com/repos/huggingface/datasets/issues/2015/events
https://github.com/huggingface/datasets/pull/2015
825,942,108
MDExOlB1bGxSZXF1ZXN0NTg3OTg4NTQ0
2,015
Fix ipython function creation in tests
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,297,019,000
1,615,298,764,000
1,615,298,763,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2015", "html_url": "https://github.com/huggingface/datasets/pull/2015", "diff_url": "https://github.com/huggingface/datasets/pull/2015.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2015.patch", "merged_at": 1615298763000 }
The test at `tests/test_caching.py::RecurseDumpTest::test_dump_ipython_function` was failing in Python 3.8 because the IPython function was not properly created. Fix #2010
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2015/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2015/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2014
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2014/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2014/comments
https://api.github.com/repos/huggingface/datasets/issues/2014/events
https://github.com/huggingface/datasets/pull/2014
825,916,531
MDExOlB1bGxSZXF1ZXN0NTg3OTY1NDg3
2,014
more explicit method parameters
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,295,909,000
1,615,370,917,000
1,615,370,916,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2014", "html_url": "https://github.com/huggingface/datasets/pull/2014", "diff_url": "https://github.com/huggingface/datasets/pull/2014.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2014.patch", "merged_at": 1615370916000 }
re: #2009. Not super convinced this is better, and while I usually fight against kwargs, here it seems to me that it better conveys the relationship to the `_split_generator` method.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2014/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2014/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2013
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2013/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2013/comments
https://api.github.com/repos/huggingface/datasets/issues/2013/events
https://github.com/huggingface/datasets/pull/2013
825,694,305
MDExOlB1bGxSZXF1ZXN0NTg3NzYzMTgx
2,013
Add Cryptonite dataset
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,285,931,000
1,615,318,027,000
1,615,318,026,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2013", "html_url": "https://github.com/huggingface/datasets/pull/2013", "diff_url": "https://github.com/huggingface/datasets/pull/2013.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2013.patch", "merged_at": 1615318026000 }
cc @aviaefrat, who's the original author of the dataset & paper; see https://github.com/aviaefrat/cryptonite
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2013/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2013/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2012
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2012/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2012/comments
https://api.github.com/repos/huggingface/datasets/issues/2012/events
https://github.com/huggingface/datasets/issues/2012
825,634,064
MDU6SXNzdWU4MjU2MzQwNjQ=
2,012
No upstream branch
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "What's the issue exactly ?\r\n\r\nGiven an `upstream` remote repository with url `https://github.com/huggingface/datasets.git`, you can totally rebase from `upstream/master`.\r\n\r\nIt's mentioned at the beginning how to add the `upstream` remote repository\r\n\r\nhttps://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L10-L14", "~~What difference is there with the default `origin` remote that is set when you clone the repo?~~ I've just understood that this applies to **forks** of the repo 🤡 " ]
1,615,283,335,000
1,615,289,611,000
1,615,289,611,000
CONTRIBUTOR
null
null
null
Feels like the documentation on adding a new dataset is outdated? https://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L49-L54 There is no `upstream` branch on the remote.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2012/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2012/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2011
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2011/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2011/comments
https://api.github.com/repos/huggingface/datasets/issues/2011/events
https://github.com/huggingface/datasets/pull/2011
825,621,952
MDExOlB1bGxSZXF1ZXN0NTg3Njk4MTAx
2,011
Add RoSent Dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,282,808,000
1,615,485,652,000
1,615,485,652,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2011", "html_url": "https://github.com/huggingface/datasets/pull/2011", "diff_url": "https://github.com/huggingface/datasets/pull/2011.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2011.patch", "merged_at": 1615485652000 }
This PR adds a Romanian sentiment analysis dataset. This PR also closes pending PR #1529. I had to add an `original_id` feature because the dataset files have repeated IDs. I can remove it if needed. I have also added an `id` feature, which is unique. Let me know in case of any issues.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2011/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2011/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2010
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2010/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2010/comments
https://api.github.com/repos/huggingface/datasets/issues/2010/events
https://github.com/huggingface/datasets/issues/2010
825,567,635
MDU6SXNzdWU4MjU1Njc2MzU=
2,010
Local testing fails
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "I'm not able to reproduce on my side.\r\nCan you provide the full stacktrace please ?\r\nWhat version of `python` and `dill` do you have ? Which OS are you using ?", "```\r\nco_filename = '<ipython-input-2-e0383a102aae>', returned_obj = [0]\r\n \r\n def create_ipython_func(co_filename, returned_obj):\r\n def func():\r\n return returned_obj\r\n \r\n code = func.__code__\r\n> code = CodeType(*[getattr(code, k) if k != \"co_filename\" else co_filename for k in code_args])\r\nE TypeError: an integer is required (got type bytes)\r\n\r\ntests/test_caching.py:152: TypeError\r\n```\r\n\r\nPython 3.8.8 \r\ndill==0.3.1.1\r\n", "I managed to reproduce. This comes from the CodeType init signature that is different in python 3.8.8\r\nI opened a PR to fix this test\r\nThanks !" ]
1,615,280,498,000
1,615,298,763,000
1,615,298,763,000
CONTRIBUTOR
null
null
null
I'm following the CI setup as described in https://github.com/huggingface/datasets/blob/8eee4fa9e133fe873a7993ba746d32ca2b687551/.circleci/config.yml#L16-L19 in a new conda environment, at commit https://github.com/huggingface/datasets/commit/4de6dbf84e93dad97e1000120d6628c88954e5d4 and getting ``` FAILED tests/test_caching.py::RecurseDumpTest::test_dump_ipython_function - TypeError: an integer is required (got type bytes) 1 failed, 2321 passed, 5109 skipped, 10 warnings in 124.32s (0:02:04) ``` Seems like a discrepancy with CI, perhaps a lib version that's not controlled? Tried with `pyarrow=={1.0.0,0.17.1,2.0.0}`
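For reference, a hedged sketch of a version-robust way to rebuild a code object with a new filename. The `CodeType` constructor's positional signature differs across Python versions (3.8 added `co_posonlyargcount`), which can misalign a positionally built argument list, whereas `code.replace` (Python 3.8+) only overrides the named field. This is an illustration, not necessarily the fix that was merged.

```python
from types import FunctionType


def create_func_with_filename(func, co_filename):
    # code.replace is immune to CodeType.__init__ signature changes
    # across Python versions, since only the named field is overridden.
    code = func.__code__.replace(co_filename=co_filename)
    return FunctionType(code, func.__globals__, func.__name__)
```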
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2010/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2010/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2009
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2009/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2009/comments
https://api.github.com/repos/huggingface/datasets/issues/2009/events
https://github.com/huggingface/datasets/issues/2009
825,541,366
MDU6SXNzdWU4MjU1NDEzNjY=
2,009
Ambiguous documentation
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @theo-m !\r\n\r\nA few lines above this line, you'll find that the `_split_generators` method returns a list of `SplitGenerator`s objects:\r\n\r\n```python\r\ndatasets.SplitGenerator(\r\n name=datasets.Split.VALIDATION,\r\n # These kwargs will be passed to _generate_examples\r\n gen_kwargs={\r\n \"filepath\": os.path.join(data_dir, \"dev.jsonl\"),\r\n \"split\": \"dev\",\r\n },\r\n),\r\n```\r\n\r\nNotice the `gen_kwargs` argument passed to the constructor of `SplitGenerator`: this dict will be unpacked as keyword arguments to pass to the `_generat_examples` method (in this case the `filepath` and `split` arguments).\r\n\r\nLet me know if that helps!", "Oh ok I hadn't made the connection between those two, will offer a tweak to the comment and the template then - thanks!" ]
1,615,279,331,000
1,615,561,294,000
1,615,561,294,000
CONTRIBUTOR
null
null
null
https://github.com/huggingface/datasets/blob/2ac9a0d24a091989f869af55f9f6411b37ff5188/templates/new_dataset_script.py#L156-L158 Looking at the template, I find this documentation line confusing: the method parameters don't include the `gen_kwargs`, so I'm unclear where they're coming from. Happy to push a PR with a clearer statement once I understand the meaning.
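To make the connection explicit (following the explanation given in the comments above), here is a minimal sketch of how `gen_kwargs` flows from `_split_generators` into `_generate_examples`; the class name, feature schema, and `data_dir` path are hypothetical, used only for illustration:

```python
import os
import datasets

class MyDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        data_dir = "/path/to/data"  # hypothetical location
        return [
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                # this dict is unpacked as keyword arguments to _generate_examples
                gen_kwargs={"filepath": os.path.join(data_dir, "dev.jsonl"), "split": "dev"},
            ),
        ]

    def _generate_examples(self, filepath, split):
        # filepath and split arrive here via gen_kwargs
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"text": line.strip()}
```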
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2009/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2009/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2008
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2008/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2008/comments
https://api.github.com/repos/huggingface/datasets/issues/2008/events
https://github.com/huggingface/datasets/pull/2008
825,153,804
MDExOlB1bGxSZXF1ZXN0NTg3Mjc1Njk4
2,008
Fix various typos/grammar in the docs
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "What do yo think of the documentation btw ?\r\nWhat parts would you like to see improved ?", "I like how concise and straightforward the docs are.\r\n\r\nFew things that would further improve the docs IMO:\r\n* the usage example of `Dataset.formatted_as` in https://huggingface.co/docs/datasets/master/processing.html\r\n* the \"Open in Colab\" button would be nice where it makes sense (we can borrow this from the transformers project + link to HF Forum)" ]
1,615,253,968,000
1,615,833,769,000
1,615,285,292,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2008", "html_url": "https://github.com/huggingface/datasets/pull/2008", "diff_url": "https://github.com/huggingface/datasets/pull/2008.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2008.patch", "merged_at": 1615285292000 }
This PR: * fixes various typos/grammar issues I came across while reading the docs * adds the "Install with conda" installation instructions Closes #1959
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2008/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2008/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2007
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2007/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2007/comments
https://api.github.com/repos/huggingface/datasets/issues/2007/events
https://github.com/huggingface/datasets/issues/2007
824,518,158
MDU6SXNzdWU4MjQ1MTgxNTg=
2,007
How to not load huggingface datasets into memory
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "So maybe a summary here: \r\nIf I could fit a large model with batch_size = X into memory, is there a way I could train this model for huge datasets with keeping setting the same? thanks ", "The `datastets` library doesn't load datasets into memory. Therefore you can load a dataset that is terabytes big without filling up your RAM.\r\n\r\nThe only thing that's loaded into memory during training is the batch used in the training step.\r\nSo as long as your model works with batch_size = X, then you can load an even bigger dataset and it will work as well with the same batch_size.\r\n\r\nNote that you still have to take into account that some batches take more memory than others, depending on the texts lengths. If it works for a batch with batch_size = X and with texts of maximum length, then it will work for all batches.\r\n\r\nIn your case I guess that there are a few long sentences in the dataset. For those long sentences you get a memory error on your GPU because they're too long. By passing `max_train_samples` you may have taken a subset of the dataset that only contain short sentences. That's probably why in your case it worked only when you set `max_train_samples`.\r\nI'd suggest you to reduce the batch size so that the batches with long sentences can be loaded on the GPU.\r\n\r\nLet me know if that helps or if you have other questions" ]
1,615,206,926,000
1,628,100,145,000
1,628,100,145,000
NONE
null
null
null
Hi, I am running this example from the transformers library, version 4.3.3 (the full documentation is at https://github.com/huggingface/transformers/issues/8771, but the command below should work out of the box): USE_TF=0 deepspeed run_seq2seq.py --model_name_or_path google/mt5-base --dataset_name wmt16 --dataset_config_name ro-en --source_prefix "translate English to Romanian: " --task translation_en_to_ro --output_dir /test/test_large --do_train --do_eval --predict_with_generate --max_train_samples 500 --max_val_samples 500 --max_source_length 128 --max_target_length 128 --sortish_sampler --per_device_train_batch_size 8 --val_max_target_length 128 --deepspeed ds_config.json --num_train_epochs 1 --eval_steps 25000 --warmup_steps 500 --overwrite_output_dir (Here please find the script: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py) If I do not pass max_train_samples in the above command and instead load the full dataset, I get a memory error on a GPU with 24 GB of memory. I need to train a large-scale mt5 model on large-scale datasets such as Wikipedia (several of them concatenated, or other multilingual datasets like OPUS). Could you help me avoid loading the full data into memory, so that the script does not depend on the dataset size? In the above example, I was hoping the script could work regardless of the dataset size, so that I can still train the model without subsampling the training set. Thank you so much @lhoestq for your great help in advance
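As a hedged illustration of the point made in the reply below (that `datasets` memory-maps the Arrow cache rather than loading it into RAM), this sketch iterates a large corpus in fixed-size slices; in an actual Trainer run the practical knob would be lowering `--per_device_train_batch_size` rather than writing this loop yourself:

```python
from datasets import load_dataset

# load_dataset writes the data to an Arrow file and memory-maps it,
# so the full corpus does not need to fit in RAM
dataset = load_dataset("wmt16", "ro-en", split="train")

# memory use is bounded by the slice you materialize, not by len(dataset)
for start in range(0, len(dataset), 8):
    batch = dataset[start : start + 8]["translation"]
    # ... feed `batch` to the model ...
```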
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2007/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2007/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2006
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2006/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2006/comments
https://api.github.com/repos/huggingface/datasets/issues/2006/events
https://github.com/huggingface/datasets/pull/2006
824,457,794
MDExOlB1bGxSZXF1ZXN0NTg2Njg5Nzk2
2,006
Don't gitignore dvc.lock
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,201,988,000
1,615,202,915,000
1,615,202,914,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2006", "html_url": "https://github.com/huggingface/datasets/pull/2006", "diff_url": "https://github.com/huggingface/datasets/pull/2006.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2006.patch", "merged_at": 1615202914000 }
The benchmark runs are [failing](https://github.com/huggingface/datasets/runs/2055534629?check_suite_focus=true) because of ``` ERROR: 'dvc.lock' is git-ignored. ``` I removed the `dvc.lock` file from the `.gitignore` to fix that
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2006/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2006/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2005
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2005/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2005/comments
https://api.github.com/repos/huggingface/datasets/issues/2005/events
https://github.com/huggingface/datasets/issues/2005
824,275,035
MDU6SXNzdWU4MjQyNzUwMzU=
2,005
Setting to torch format not working with torchvision and MNIST
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Adding to the previous information, I think `torch.utils.data.DataLoader` is doing some conversion. \r\nWhat I tried:\r\n```python\r\ntrain_dataset = load_dataset('mnist')\r\n```\r\nI don't use any `map` or `set_format` or any `transform`. I use this directly, and try to load batches using the `DataLoader` with batch size 2, I get an output like this for the `image`:\r\n\r\n```\r\n[[tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor...\r\n```\r\nFor `label`, it works fine:\r\n```\r\ntensor([7, 6])\r\n```\r\nNote that I didn't specify conversion to torch tensors anywhere.\r\n\r\nBasically, there are two problems here:\r\n1. `dataset.map` doesn't return tensor type objects, even though it uses the transforms, the grayscale conversion in transform was done, but the output was lists only.\r\n2. The `DataLoader` performs its own conversion, which may be not desired.\r\n\r\nI understand that we can't change `DataLoader` because it is a torch functionality, however, is there a way we can handle image data to allow using it with torch `DataLoader` and `torchvision` properly?\r\n\r\nI think if the `image` was a torch tensor (N,H,W,C), or a list of torch tensors (H,W,C), before it is passed to `DataLoader`, then we might not face this issue. ", "What's the feature types of your new dataset after `.map` ?\r\n\r\nCan you try with adding `features=` in the `.map` call in order to set the \"image\" feature type to `Array2D` ?\r\nThe default feature type is lists of lists, we've not implemented shape verification to use ArrayXD instead of nested lists yet", "Hi @lhoestq\r\n\r\nRaw feature types are like this:\r\n```\r\nImage:\r\n<class 'list'> 60000 #(type, len)\r\n<class 'list'> 28\r\n<class 'list'> 28\r\n<class 'int'>\r\nLabel:\r\n<class 'list'> 60000\r\n<class 'int'>\r\n```\r\nInside the `prepare_feature` method with batch size 100000 , after processing, they are like this:\r\n\r\nInside Prepare Train Features\r\n```\r\nImage:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nAfter map, the feature type are like this:\r\n```\r\nImage:\r\n<class 'list'> 60000\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'list'> 28\r\n<class 'float'>\r\nLabel:\r\n<class 'list'> 60000\r\n<class 'int'>\r\n```\r\n\r\nAfter dataloader with batch size 2, the batch features are like this:\r\n```\r\nImage:\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\n<hr>\r\n\r\nWhen I was setting the format of `train_dataset` to 'torch' after mapping - \r\n```\r\nImage:\r\n<class 'list'> 60000\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nCorresponding DataLoader batch:\r\n```\r\nFrom DataLoader batch features\r\nImage:\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nI will check with features and get back.\r\n\r\n\r\n\r\n", "Hi @lhoestq\r\n\r\n# Using Array3D\r\nI tried this:\r\n```python\r\nfeatures = datasets.Features({\r\n \"image\": 
datasets.Array3D(shape=(1,28,28),dtype=\"float32\"),\r\n \"label\": datasets.features.ClassLabel(names=[\"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\"]),\r\n })\r\ntrain_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)\r\n```\r\nand it didn't fix the issue.\r\n\r\nDuring the `prepare_train_features:\r\n```\r\nImage:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nAfter the `map`:\r\n\r\n```\r\nImage:\r\n<class 'list'> 60000\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'list'> 28\r\n<class 'float'>\r\nLabel:\r\n<class 'list'> 60000\r\n<class 'int'>\r\n```\r\nFrom the DataLoader batch:\r\n```\r\nImage:\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\nIt is the same as before.\r\n\r\n---\r\n\r\nUsing `datasets.Sequence(datasets.Array2D(shape=(28,28),dtype=\"float32\"))` gave an error during `map`:\r\n\r\n```python\r\nArrowNotImplementedError Traceback (most recent call last)\r\n<ipython-input-95-d28e69289084> in <module>()\r\n 3 \"label\": datasets.features.ClassLabel(names=[\"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\"]),\r\n 4 })\r\n----> 5 train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)\r\n\r\n15 frames\r\n/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc)\r\n 446 num_proc=num_proc,\r\n 447 )\r\n--> 448 for k, dataset in self.items()\r\n 449 }\r\n 450 )\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py in <dictcomp>(.0)\r\n 446 num_proc=num_proc,\r\n 447 )\r\n--> 448 for k, dataset in self.items()\r\n 449 }\r\n 450 )\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)\r\n 1307 fn_kwargs=fn_kwargs,\r\n 1308 new_fingerprint=new_fingerprint,\r\n-> 1309 update_data=update_data,\r\n 1310 )\r\n 1311 else:\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)\r\n 202 }\r\n 203 # apply actual function\r\n--> 204 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 205 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [out]\r\n 206 # re-apply format to the output\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)\r\n 335 # Call actual function\r\n 336 \r\n--> 337 out = func(self, *args, **kwargs)\r\n 338 \r\n 339 # Update fingerprint of in-place transforms + update in-place history of transforms\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, 
new_fingerprint, rank, offset, update_data)\r\n 1580 if update_data:\r\n 1581 batch = cast_to_python_objects(batch)\r\n-> 1582 writer.write_batch(batch)\r\n 1583 if update_data:\r\n 1584 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)\r\n 274 typed_sequence = TypedSequence(batch_examples[col], type=col_type, try_type=col_try_type)\r\n 275 typed_sequence_examples[col] = typed_sequence\r\n--> 276 pa_table = pa.Table.from_pydict(typed_sequence_examples)\r\n 277 self.write_table(pa_table, writer_batch_size)\r\n 278 \r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.asarray()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol()\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in __arrow_array__(self, type)\r\n 95 out = pa.ExtensionArray.from_storage(type, pa.array(self.data, type.storage_dtype))\r\n 96 else:\r\n---> 97 out = pa.array(self.data, type=type)\r\n 98 if trying_type and out[0].as_py() != self.data[0]:\r\n 99 raise TypeError(\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowNotImplementedError: extension\r\n```", "# Convert raw tensors to torch format\r\nStrangely, converting to torch tensors works perfectly on `raw_dataset`:\r\n```python\r\nraw_dataset.set_format('torch',columns=['image','label'])\r\n```\r\nTypes:\r\n```\r\nImage:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nUsing this for transforms:\r\n```python\r\ndef prepare_features(examples):\r\n images = []\r\n labels = []\r\n for example_idx, example in enumerate(examples[\"image\"]):\r\n if transform is not None:\r\n images.append(transform(\r\n examples[\"image\"][example_idx].numpy()\r\n ))\r\n else:\r\n images.append(examples[\"image\"][example_idx].numpy())\r\n labels.append(examples[\"label\"][example_idx])\r\n output = {\"label\":labels, \"image\":images}\r\n return output\r\n```\r\n\r\nInside `prepare_train_features`:\r\n```\r\nImage:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nAfter `map`:\r\n```\r\nImage:\r\n<class 'list'> 60000\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'>\r\n```\r\nDataLoader batch:\r\n\r\n```\r\nImage:\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\n\r\n---\r\n\r\n## Using `torch` 
format:\r\n```\r\nImage:\r\n<class 'list'> 60000\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'>\r\n```\r\nDataLoader batches:\r\n\r\n```\r\nImage:\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\n\r\n---\r\n## Using the features - `Array3D`:\r\n\r\n```\r\nImage:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nAfter `map`:\r\n```\r\nImage:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nAfter DataLoader `batch`:\r\n```\r\nImage:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nThe last one works perfectly.\r\n\r\n![image](https://user-images.githubusercontent.com/29076344/110491452-4cf09c00-8117-11eb-8a47-73bf3fc0c3dc.png)\r\n\r\nI wonder why this worked, and others didn't.\r\n\r\n\r\n\r\n\r\n\r\n\r\n", "Concluding, the way it works right now is:\r\n\r\n1. Convert the raw dataset to `torch` format.\r\n2. Use the transform and apply it using `map`, ensuring the returned values are tensors. \r\n3. When mapping, use `features` with `image` being of `Array3D` type.", "What the dataset returns depends on the feature type.\r\nFor a feature type that is Sequence(Sequence(Sequence(Value(\"uint8\")))), a dataset formatted as \"torch\" returns lists of lists of tensors. This is because the list lengths may vary.\r\nFor a feature type that is Array3D on the other hand it returns one tensor. This is because the size of the tensor is fixed and defined by the Array3D type.", "Okay, that makes sense.\r\nRaw images are lists of Array2D, hence we get a single tensor when `set_format` is used. But why should I need to convert the raw images to `torch` format when `map` does this internally?\r\n\r\nUsing `Array3D` did not work with `map` when raw images weren't `set_format`ted to torch type.", "I understand that `map` needs to know what kind of output tensors are expected, and thus converting the raw dataset to `torch` format is necessary. Closing the issue since it is resolved." ]
1,615,189,091,000
1,615,312,693,000
1,615,312,693,000
CONTRIBUTOR
null
null
null
Hi, I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object. A snippet of what I am trying to do: ```python def prepare_features(examples): images = [] labels = [] for example_idx, example in enumerate(examples["image"]): if transform is not None: images.append(transform( np.array(examples["image"][example_idx], dtype=np.uint8) )) else: images.append(torch.tensor(np.array(examples["image"][example_idx], dtype=np.uint8))) labels.append(torch.tensor(examples["label"][example_idx])) output = {"label":labels, "image":images} return output raw_dataset = load_dataset('mnist') train_dataset = raw_dataset.map(prepare_features, batched=True, batch_size=10000) train_dataset.set_format("torch",columns=["image","label"]) ``` After this, I check the type of the following: ```python print(type(train_dataset["train"]["label"])) print(type(train_dataset["train"]["image"][0])) ``` This leads to the following output: ```python <class 'torch.Tensor'> <class 'list'> ``` I use `torch.utils.data.DataLoader` for batches; the type of `batch["train"]["image"]` is also `<class 'list'>`. I don't understand why only the `label` is converted to a torch tensor and the image is not. How can I fix this issue? Thanks, Gunjan EDIT: I just checked the shapes and the types: `batch["image"]` is actually a list of lists of tensors. Its shape is (1,28,2,28), where `batch_size` is 2. I don't understand why this is happening. Ideally it should be a tensor of shape (2,1,28,28). EDIT 2: Inside `prepare_train_features`, the shape of `images[0]` is `torch.Size([1,28,28])`, so the conversion is working. However, the output of the `map` is a list of lists of lists of lists.
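For reference, here is a sketch that mirrors the recipe that eventually resolved this thread (convert the raw dataset to torch format first, then pass explicit `Array3D` features to `map`). It follows the steps confirmed in the comments above but is not guaranteed against every `datasets` version; the `astype(np.uint8)` call is an assumption to keep `ToTensor` scaling consistent:

```python
import datasets
import numpy as np
from torchvision import transforms

transform = transforms.Compose([transforms.ToTensor()])

def prepare_features(examples):
    # after set_format("torch"), each image arrives as a torch tensor
    images = [transform(image.numpy().astype(np.uint8)) for image in examples["image"]]
    return {"image": images, "label": examples["label"]}

# fixed-shape Array3D features let map() store one tensor per image
features = datasets.Features({
    "image": datasets.Array3D(shape=(1, 28, 28), dtype="float32"),
    "label": datasets.features.ClassLabel(names=[str(i) for i in range(10)]),
})

raw_dataset = datasets.load_dataset("mnist")
raw_dataset.set_format("torch", columns=["image", "label"])
train_dataset = raw_dataset.map(prepare_features, features=features, batched=True, batch_size=10000)
train_dataset.set_format("torch", columns=["image", "label"])
```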
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2005/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2005/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2004
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2004/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2004/comments
https://api.github.com/repos/huggingface/datasets/issues/2004/events
https://github.com/huggingface/datasets/pull/2004
824,080,760
MDExOlB1bGxSZXF1ZXN0NTg2MzcyODY1
2,004
LaRoSeDa
{ "login": "MihaelaGaman", "id": 6823177, "node_id": "MDQ6VXNlcjY4MjMxNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/6823177?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MihaelaGaman", "html_url": "https://github.com/MihaelaGaman", "followers_url": "https://api.github.com/users/MihaelaGaman/followers", "following_url": "https://api.github.com/users/MihaelaGaman/following{/other_user}", "gists_url": "https://api.github.com/users/MihaelaGaman/gists{/gist_id}", "starred_url": "https://api.github.com/users/MihaelaGaman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MihaelaGaman/subscriptions", "organizations_url": "https://api.github.com/users/MihaelaGaman/orgs", "repos_url": "https://api.github.com/users/MihaelaGaman/repos", "events_url": "https://api.github.com/users/MihaelaGaman/events{/privacy}", "received_events_url": "https://api.github.com/users/MihaelaGaman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq all the changes requested are implemented. Thank you for your time and feedback :)" ]
1,615,165,592,000
1,615,977,800,000
1,615,977,800,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2004", "html_url": "https://github.com/huggingface/datasets/pull/2004", "diff_url": "https://github.com/huggingface/datasets/pull/2004.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2004.patch", "merged_at": 1615977800000 }
Add LaRoSeDa to huggingface datasets.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2004/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2004/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2003
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2003/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2003/comments
https://api.github.com/repos/huggingface/datasets/issues/2003/events
https://github.com/huggingface/datasets/issues/2003
824,034,678
MDU6SXNzdWU4MjQwMzQ2Nzg=
2,003
Messages are being printed to the `stdout`
{ "login": "mahnerak", "id": 1367529, "node_id": "MDQ6VXNlcjEzNjc1Mjk=", "avatar_url": "https://avatars.githubusercontent.com/u/1367529?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mahnerak", "html_url": "https://github.com/mahnerak", "followers_url": "https://api.github.com/users/mahnerak/followers", "following_url": "https://api.github.com/users/mahnerak/following{/other_user}", "gists_url": "https://api.github.com/users/mahnerak/gists{/gist_id}", "starred_url": "https://api.github.com/users/mahnerak/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mahnerak/subscriptions", "organizations_url": "https://api.github.com/users/mahnerak/orgs", "repos_url": "https://api.github.com/users/mahnerak/repos", "events_url": "https://api.github.com/users/mahnerak/events{/privacy}", "received_events_url": "https://api.github.com/users/mahnerak/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "This is expected to show this message to the user via stdout.\r\nThis way the users see it directly and can cancel the downloading if they want to.\r\nCould you elaborate why it would be better to have it in stderr instead of stdout ?", "@lhoestq, sorry for the late reply\r\n\r\nI completely understand why you decided to output a message that is always shown. The only problem is that the message is printed to the `stdout`. For example, if the user runs `python run_glue.py > log_file`, it will redirect `stdout` to the file named `log_file`, and the message will not be shown to the user.\r\n\r\nInstead, we should print this message to `stderr`. Even in the case of `python run_glue.py > log_file` only `stdout` is being redirected and so the message is always shown." ]
1,615,154,974,000
1,615,830,467,000
null
NONE
null
null
null
In this code segment, we can see that some messages are being printed to `stdout`. https://github.com/huggingface/datasets/blob/7e60bb509b595e8edc60a87f32b2bacfc065d607/src/datasets/builder.py#L545-L554 According to the comment, this is done intentionally, but I don't really understand why we don't log it at a higher level or print it directly to `stderr`. In my opinion, this kind of message should never be printed to stdout. At the very least, some configuration flag should be provided to explicitly prevent the package from contaminating stdout.
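As a hedged illustration of the suggestion above (not the library's actual implementation), routing user-facing status messages to `stderr` keeps them visible while leaving `stdout` clean for redirection:

```python
import sys

def notify(message: str) -> None:
    # Writing to stderr keeps the message visible even when the user
    # redirects stdout, e.g. `python run_glue.py > log_file`.
    print(message, file=sys.stderr)

notify("Downloading and preparing dataset ...")
```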
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2003/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2003/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2002
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2002/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2002/comments
https://api.github.com/repos/huggingface/datasets/issues/2002/events
https://github.com/huggingface/datasets/pull/2002
823,955,744
MDExOlB1bGxSZXF1ZXN0NTg2MjgwNzE3
2,002
MOROCO
{ "login": "MihaelaGaman", "id": 6823177, "node_id": "MDQ6VXNlcjY4MjMxNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/6823177?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MihaelaGaman", "html_url": "https://github.com/MihaelaGaman", "followers_url": "https://api.github.com/users/MihaelaGaman/followers", "following_url": "https://api.github.com/users/MihaelaGaman/following{/other_user}", "gists_url": "https://api.github.com/users/MihaelaGaman/gists{/gist_id}", "starred_url": "https://api.github.com/users/MihaelaGaman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MihaelaGaman/subscriptions", "organizations_url": "https://api.github.com/users/MihaelaGaman/orgs", "repos_url": "https://api.github.com/users/MihaelaGaman/repos", "events_url": "https://api.github.com/users/MihaelaGaman/events{/privacy}", "received_events_url": "https://api.github.com/users/MihaelaGaman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq Thank you for all the feedback. I've added the suggested changes in my last commit." ]
1,615,134,137,000
1,616,147,526,000
1,616,147,526,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2002", "html_url": "https://github.com/huggingface/datasets/pull/2002", "diff_url": "https://github.com/huggingface/datasets/pull/2002.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2002.patch", "merged_at": 1616147526000 }
Add MOROCO to huggingface datasets.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2002/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2002/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2001
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2001/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2001/comments
https://api.github.com/repos/huggingface/datasets/issues/2001/events
https://github.com/huggingface/datasets/issues/2001
823,946,706
MDU6SXNzdWU4MjM5NDY3MDY=
2,001
Empty evidence document ("provenance") in KILT ELI5 dataset
{ "login": "donggyukimc", "id": 16605764, "node_id": "MDQ6VXNlcjE2NjA1NzY0", "avatar_url": "https://avatars.githubusercontent.com/u/16605764?v=4", "gravatar_id": "", "url": "https://api.github.com/users/donggyukimc", "html_url": "https://github.com/donggyukimc", "followers_url": "https://api.github.com/users/donggyukimc/followers", "following_url": "https://api.github.com/users/donggyukimc/following{/other_user}", "gists_url": "https://api.github.com/users/donggyukimc/gists{/gist_id}", "starred_url": "https://api.github.com/users/donggyukimc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/donggyukimc/subscriptions", "organizations_url": "https://api.github.com/users/donggyukimc/orgs", "repos_url": "https://api.github.com/users/donggyukimc/repos", "events_url": "https://api.github.com/users/donggyukimc/events{/privacy}", "received_events_url": "https://api.github.com/users/donggyukimc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,615,131,695,000
1,615,960,261,000
1,615,960,261,000
NONE
null
null
null
In the original KILT benchmark (https://github.com/facebookresearch/KILT), every sample has an evidence document (i.e. wikipedia page id) for prediction. For example, a sample in the ELI5 dataset includes provenance (= evidence document) in the following format: `{"id": "1kiwfx", "input": "In Trading Places (1983, Akroyd/Murphy) how does the scheme at the end of the movie work? Why would buying a lot of OJ at a high price ruin the Duke Brothers?", "output": [{"answer": "I feel so old. People have been askinbg what happened at the end of this movie for what must be the last 15 years of my life. It never stops. Every year/month/fortnight, I see someone asking what happened, and someone explaining. Andf it will keep on happening, until I am 90yrs old, in a home, with nothing but the Internet and my bladder to keep me going. And there it will be: \"what happens at the end of Trading Places?\""}, {"provenance": [{"wikipedia_id": "242855", "title": "Futures contract", "section": "Section::::Abstract.", "start_paragraph_id": 1, "start_character": 14, "end_paragraph_id": 1, "end_character": 612, "bleu_score": 0.9232808519770748}]}], "meta": {"partial_evidence": [{"wikipedia_id": "520990", "title": "Trading Places", "section": "Section::::Plot.\n", "start_paragraph_id": 7, "end_paragraph_id": 7, "meta": {"evidence_span": ["On television, they learn that Clarence Beeks is transporting a secret USDA report on orange crop forecasts.", "On television, they learn that Clarence Beeks is transporting a secret USDA report on orange crop forecasts. Winthorpe and Valentine recall large payments made to Beeks by the Dukes and realize that the Dukes plan to obtain the report to corner the market on frozen orange juice.", "Winthorpe and Valentine recall large payments made to Beeks by the Dukes and realize that the Dukes plan to obtain the report to corner the market on frozen orange juice."]}}]}}` However, the KILT ELI5 dataset from the huggingface datasets library only contains an empty list of provenance. `{'id': '1oy5tc', 'input': 'in football whats the point of wasting the first two plays with a rush - up the middle - not regular rush plays i get those', 'meta': {'left_context': '', 'mention': '', 'obj_surface': [], 'partial_evidence': [], 'right_context': '', 'sub_surface': [], 'subj_aliases': [], 'template_questions': []}, 'output': [{'answer': 'In most cases the O-Line is supposed to make a hole for the running back to go through. If you run too many plays to the outside/throws the defense will catch on.\n\nAlso, 2 5 yard plays gets you a new set of downs.', 'meta': {'score': 2}, 'provenance': []}, {'answer': "I you don't like those type of plays, watch CFL. We only get 3 downs so you can't afford to waste one. Lots more passing.", 'meta': {'score': 2}, 'provenance': []}]} ` Should I perform some other procedure to obtain the evidence documents?
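If only the samples carrying evidence are needed, one workaround is to filter them out. This is a sketch assuming the `kilt_tasks` loader with the `eli5` config; depending on the feature schema, `example["output"]` may surface as a list of dicts or a dict of lists, so both shapes are handled:

```python
from datasets import load_dataset

eli5 = load_dataset("kilt_tasks", "eli5", split="train")

def has_provenance(example):
    outputs = example["output"]
    if isinstance(outputs, dict):
        # dict-of-lists shape: one provenance list per output
        return any(len(p) > 0 for p in outputs.get("provenance", []))
    # list-of-dicts shape
    return any(len(o.get("provenance", [])) > 0 for o in outputs)

with_provenance = eli5.filter(has_provenance)
print(len(eli5), len(with_provenance))
```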
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2001/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2001/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2000
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2000/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2000/comments
https://api.github.com/repos/huggingface/datasets/issues/2000/events
https://github.com/huggingface/datasets/issues/2000
823,899,910
MDU6SXNzdWU4MjM4OTk5MTA=
2,000
Windows Permission Error (most recent version of datasets)
{ "login": "itsLuisa", "id": 73881148, "node_id": "MDQ6VXNlcjczODgxMTQ4", "avatar_url": "https://avatars.githubusercontent.com/u/73881148?v=4", "gravatar_id": "", "url": "https://api.github.com/users/itsLuisa", "html_url": "https://github.com/itsLuisa", "followers_url": "https://api.github.com/users/itsLuisa/followers", "following_url": "https://api.github.com/users/itsLuisa/following{/other_user}", "gists_url": "https://api.github.com/users/itsLuisa/gists{/gist_id}", "starred_url": "https://api.github.com/users/itsLuisa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/itsLuisa/subscriptions", "organizations_url": "https://api.github.com/users/itsLuisa/orgs", "repos_url": "https://api.github.com/users/itsLuisa/repos", "events_url": "https://api.github.com/users/itsLuisa/events{/privacy}", "received_events_url": "https://api.github.com/users/itsLuisa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @itsLuisa !\r\n\r\nCould you give us more information about the error you're getting, please?\r\nA copy-paste of the Traceback would be nice to get a better understanding of what is wrong :) ", "Hello @SBrandeis , this is it:\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 537, in incomplete_dir\r\n yield tmp_dir\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 578, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 656, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 982, in _prepare_split\r\n num_examples, num_bytes = writer.finalize()\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\arrow_writer.py\", line 297, in finalize\r\n self.write_on_file()\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\arrow_writer.py\", line 230, in write_on_file\r\n pa_array = pa.array(typed_sequence)\r\n File \"pyarrow\\array.pxi\", line 222, in pyarrow.lib.array\r\n File \"pyarrow\\array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\arrow_writer.py\", line 97, in __arrow_array__\r\n out = pa.array(self.data, type=type)\r\n File \"pyarrow\\array.pxi\", line 305, in pyarrow.lib.array\r\n File \"pyarrow\\array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow\\error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\\error.pxi\", line 107, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowTypeError: Expected bytes, got a 'list' object\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"C:/Users/Luisa/Documents/Uni/WS 2020,21/Neural Networks/Final_Project/NN_Project/data_loading.py\", line 122, in <module>\r\n main()\r\n File \"C:/Users/Luisa/Documents/Uni/WS 2020,21/Neural Networks/Final_Project/NN_Project/data_loading.py\", line 111, in main\r\n dataset = datasets.load_dataset(\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\load.py\", line 740, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 586, in download_and_prepare\r\n self._save_info()\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\contextlib.py\", line 131, in __exit__\r\n self.gen.throw(type, value, traceback)\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 543, in incomplete_dir\r\n shutil.rmtree(tmp_dir)\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\shutil.py\", line 740, in rmtree\r\n return _rmtree_unsafe(path, onerror)\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\shutil.py\", line 618, in _rmtree_unsafe\r\n onerror(os.unlink, fullname, sys.exc_info())\r\n File 
\"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\shutil.py\", line 616, in _rmtree_unsafe\r\n os.unlink(fullname)\r\nPermissionError: [WinError 32] Der Prozess kann nicht auf die Datei zugreifen, da sie von einem anderen Prozess verwendet wird: 'C:\\\\Users\\\\Luisa\\\\.cache\\\\huggingface\\\\datasets\\\\sample\\\\default-20ee7d51a6a9454f\\\\0.0.0\\\\5fc4c3a355ea77ab446bd31fca5082437600b8364d29b2b95264048bd1f398b1.incomplete\\\\sample-train.arrow'\r\n\r\nProcess finished with exit code 1\r\n```", "Hi @itsLuisa, thanks for sharing the Traceback.\r\n\r\nYou are defining the \"id\" field as a `string` feature:\r\n```python\r\nclass Sample(datasets.GeneratorBasedBuilder):\r\n ...\r\n\r\n def _info(self):\r\n return datasets.DatasetInfo(\r\n features=datasets.Features(\r\n {\r\n \"id\": datasets.Value(\"string\"),\r\n # ^^ here\r\n \"tokens\": datasets.Sequence(datasets.Value(\"string\")),\r\n \"pos_tags\": datasets.Sequence(datasets.features.ClassLabel(names=[...])),\r\n[...]\r\n```\r\n\r\nBut in the `_generate_examples`, the \"id\" field is a list:\r\n```python\r\nids = list()\r\n```\r\n\r\nChanging:\r\n```python\r\n\"id\": datasets.Value(\"string\"),\r\n```\r\nInto:\r\n```python\r\n\"id\": datasets.Sequence(datasets.Value(\"string\")),\r\n```\r\n\r\nShould fix your issue.\r\n\r\nLet me know if this helps!", "It seems to be working now, thanks a lot for the help, @SBrandeis !", "Glad to hear it!\r\nI'm closing the issue" ]
1,615,118,128,000
1,615,293,777,000
1,615,293,777,000
NONE
null
null
null
Hi everyone, Can anyone help me with why the dataset loading script below raises a Windows Permission Error? I stuck quite closely to https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py , only I want to load the data from three local three-column tsv-files (id\ttokens\tpos_tags\n). I am using the most recent version of datasets. Thank you in advance! Luisa My script: ``` import datasets import csv logger = datasets.logging.get_logger(__name__) class SampleConfig(datasets.BuilderConfig): def __init__(self, **kwargs): super(SampleConfig, self).__init__(**kwargs) class Sample(datasets.GeneratorBasedBuilder): BUILDER_CONFIGS = [ SampleConfig(name="conll2003", version=datasets.Version("1.0.0"), description="Conll2003 dataset"), ] def _info(self): return datasets.DatasetInfo( description="Dataset with words and their POS-Tags", features=datasets.Features( { "id": datasets.Value("string"), "tokens": datasets.Sequence(datasets.Value("string")), "pos_tags": datasets.Sequence( datasets.features.ClassLabel( names=[ "''", ",", "-LRB-", "-RRB-", ".", ":", "CC", "CD", "DT", "EX", "FW", "HYPH", "IN", "JJ", "JJR", "JJS", "MD", "NN", "NNP", "NNPS", "NNS", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "WDT", "WP", "WRB", "``" ] ) ), } ), supervised_keys=None, homepage="https://catalog.ldc.upenn.edu/LDC2011T03", citation="Weischedel, Ralph, et al. OntoNotes Release 4.0 LDC2011T03. Web Download. Philadelphia: Linguistic Data Consortium, 2011.", ) def _split_generators(self, dl_manager): loaded_files = dl_manager.download_and_extract(self.config.data_files) return [ datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": loaded_files["train"]}), datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": loaded_files["test"]}), datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": loaded_files["val"]}) ] def _generate_examples(self, filepath): logger.info("generating examples from = %s", filepath) with open(filepath, encoding="cp1252") as f: data = csv.reader(f, delimiter="\t") ids = list() tokens = list() pos_tags = list() for id_, line in enumerate(data): #print(line) if len(line) == 1: if tokens: yield id_, {"id": ids, "tokens": tokens, "pos_tags": pos_tags} ids = list() tokens = list() pos_tags = list() else: ids.append(line[0]) tokens.append(line[1]) pos_tags.append(line[2]) # last example yield id_, {"id": ids, "tokens": tokens, "pos_tags": pos_tags} def main(): dataset = datasets.load_dataset( "data_loading.py", data_files={ "train": "train.tsv", "test": "test.tsv", "val": "val.tsv" } ) #print(dataset) if __name__=="__main__": main() ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2000/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2000/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1999
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1999/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1999/comments
https://api.github.com/repos/huggingface/datasets/issues/1999/events
https://github.com/huggingface/datasets/pull/1999
823,753,591
MDExOlB1bGxSZXF1ZXN0NTg2MTM5ODMy
1,999
Add FashionMNIST dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq,\r\n\r\nI have added the changes from the review." ]
1,615,066,617,000
1,615,283,531,000
1,615,283,531,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1999", "html_url": "https://github.com/huggingface/datasets/pull/1999", "diff_url": "https://github.com/huggingface/datasets/pull/1999.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1999.patch", "merged_at": 1615283531000 }
This PR adds [FashionMNIST](https://github.com/zalandoresearch/fashion-mnist) dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1999/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1999/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1998
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1998/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1998/comments
https://api.github.com/repos/huggingface/datasets/issues/1998/events
https://github.com/huggingface/datasets/pull/1998
823,723,960
MDExOlB1bGxSZXF1ZXN0NTg2MTE4NTQ4
1,998
Add -DOCSTART- note to dataset card of conll-like datasets
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Nice catch! Yes I didn't check the actual data, instead I was just looking for the `if line.startswith(\"-DOCSTART-\")` pattern." ]
1,615,057,709,000
1,615,429,207,000
1,615,429,207,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1998", "html_url": "https://github.com/huggingface/datasets/pull/1998", "diff_url": "https://github.com/huggingface/datasets/pull/1998.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1998.patch", "merged_at": null }
Closes #1983
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1998/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1998/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1997
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1997/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1997/comments
https://api.github.com/repos/huggingface/datasets/issues/1997/events
https://github.com/huggingface/datasets/issues/1997
823,679,465
MDU6SXNzdWU4MjM2Nzk0NjU=
1,997
from datasets import MoleculeDataset, GEOMDataset
{ "login": "futianfan", "id": 5087210, "node_id": "MDQ6VXNlcjUwODcyMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/5087210?v=4", "gravatar_id": "", "url": "https://api.github.com/users/futianfan", "html_url": "https://github.com/futianfan", "followers_url": "https://api.github.com/users/futianfan/followers", "following_url": "https://api.github.com/users/futianfan/following{/other_user}", "gists_url": "https://api.github.com/users/futianfan/gists{/gist_id}", "starred_url": "https://api.github.com/users/futianfan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/futianfan/subscriptions", "organizations_url": "https://api.github.com/users/futianfan/orgs", "repos_url": "https://api.github.com/users/futianfan/repos", "events_url": "https://api.github.com/users/futianfan/events{/privacy}", "received_events_url": "https://api.github.com/users/futianfan/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,615,045,819,000
1,615,047,206,000
1,615,047,206,000
NONE
null
null
null
I met the ImportError: cannot import name 'MoleculeDataset' from 'datasets'. Has anyone met a similar issue? Thanks!
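A note on the error above: the `datasets` library does not expose per-dataset classes such as `MoleculeDataset` or `GEOMDataset`; datasets are loaded by name through `load_dataset`. A minimal sketch, assuming the data is hosted as a hub dataset (the dataset id below is a hypothetical placeholder, not a real one):

```python
from datasets import load_dataset

# `datasets` exposes loaders, not per-dataset classes, so there is no
# `MoleculeDataset` to import. A dataset is fetched by its hub name instead.
# "some_org/molecules" is a hypothetical placeholder, not a real dataset id.
dataset = load_dataset("some_org/molecules")
print(dataset)
```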
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1997/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1997/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1996
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1996/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1996/comments
https://api.github.com/repos/huggingface/datasets/issues/1996/events
https://github.com/huggingface/datasets/issues/1996
823,573,410
MDU6SXNzdWU4MjM1NzM0MTA=
1,996
Error when exploring `arabic_speech_corpus`
{ "login": "elgeish", "id": 6879673, "node_id": "MDQ6VXNlcjY4Nzk2NzM=", "avatar_url": "https://avatars.githubusercontent.com/u/6879673?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elgeish", "html_url": "https://github.com/elgeish", "followers_url": "https://api.github.com/users/elgeish/followers", "following_url": "https://api.github.com/users/elgeish/following{/other_user}", "gists_url": "https://api.github.com/users/elgeish/gists{/gist_id}", "starred_url": "https://api.github.com/users/elgeish/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elgeish/subscriptions", "organizations_url": "https://api.github.com/users/elgeish/orgs", "repos_url": "https://api.github.com/users/elgeish/repos", "events_url": "https://api.github.com/users/elgeish/events{/privacy}", "received_events_url": "https://api.github.com/users/elgeish/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" }, { "id": 2725241052, "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech", "name": "speech", "color": "d93f0b", "default": false, "description": "" } ]
open
false
null
[]
null
[ "Thanks for reporting! We'll fix that as soon as possible", "Actually soundfile is not a dependency of this dataset.\r\nThe error comes from a bug that was fixed in this commit: https://github.com/huggingface/datasets/pull/1767/commits/c304e63629f4453367de2fd42883a78768055532\r\nBasically the library used to consider the `import soundfile` in the docstring as a dependency, while it's just here as a code example.\r\n\r\nUpdating the viewer to the latest version of `datasets` should fix this issue\r\n" ]
1,615,010,120,000
1,615,288,345,000
null
NONE
null
null
null
Navigate to https://huggingface.co/datasets/viewer/?dataset=arabic_speech_corpus Error: ``` ImportError: To be able to use this dataset, you need to install the following dependencies['soundfile'] using 'pip install soundfile' for instance' Traceback: File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/script_runner.py", line 332, in _run_script exec(code, module.__dict__) File "/home/sasha/nlp-viewer/run.py", line 233, in <module> configs = get_confs(option) File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/caching.py", line 604, in wrapped_func return get_or_create_cached_value() File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/caching.py", line 588, in get_or_create_cached_value return_value = func(*args, **kwargs) File "/home/sasha/nlp-viewer/run.py", line 145, in get_confs module_path = nlp.load.prepare_module(path, dataset=True File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/datasets/load.py", line 342, in prepare_module f"To be able to use this {module_type}, you need to install the following dependencies" ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1996/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1996/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1995
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1995/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1995/comments
https://api.github.com/repos/huggingface/datasets/issues/1995/events
https://github.com/huggingface/datasets/pull/1995
822,878,431
MDExOlB1bGxSZXF1ZXN0NTg1NDI5NTg0
1,995
[Timit_asr] Make sure not only the first sample is used
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "cc @lhoestq @vrindaprabhu", "Failing `run (push)` is unrelated -> merging", "Thanks for fixing this, it was affecting my runs for https://github.com/huggingface/transformers/pull/10581/", "I am seeing this very late! Sorry for the blunder everyone! :(" ]
1,614,933,771,000
1,625,034,353,000
1,614,934,739,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1995", "html_url": "https://github.com/huggingface/datasets/pull/1995", "diff_url": "https://github.com/huggingface/datasets/pull/1995.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1995.patch", "merged_at": 1614934739000 }
When playing around with timit I noticed that only the first sample is used for all indices. I corrected this typo so that the dataset is correctly loaded.
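For context, a minimal sketch of the bug class described above (this illustrates the pattern only; it is not the actual diff):

```python
# Illustration of the bug pattern described above, not the actual fix:
# inside a `_generate_examples`-style loop, reusing a fixed element instead
# of the loop variable makes every index yield the first sample.
samples = ["sample_0", "sample_1", "sample_2"]

for key, sample in enumerate(samples):
    buggy = samples[0]   # typo: always the first sample
    fixed = sample       # fix: use the loop variable
    print(key, buggy, fixed)
```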
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1995/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1995/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1994
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1994/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1994/comments
https://api.github.com/repos/huggingface/datasets/issues/1994/events
https://github.com/huggingface/datasets/issues/1994
822,871,238
MDU6SXNzdWU4MjI4NzEyMzg=
1,994
Not being able to get the Wikipedia "es" language config
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "@lhoestq I really appreciate if you could help me providiing processed datasets, I do not really have access to enough resources to run the apache-beam and need to run the codes on these datasets. Only en/de/fr currently works, but I need all the languages more or less. thanks ", "Hi @dorost1234, I think I can help you a little. I’ve processed some Wikipedia datasets (Spanish inclusive) using the HF/datasets library during recent research.\r\n\r\n@lhoestq Could you help me to upload these preprocessed datasets to Huggingface's repositories? To be more precise, I've built datasets from the following languages using the 20201201 dumps: Spanish, Portuguese, Russian, French, Japanese, Chinese, and Turkish. Process these datasets have high costs that most of the community can't afford. I think these preprocessed datasets I have could be helpful for someone without access to high-resource machines to process Wikipedia's dumps like @dorost1234\r\n\r\n", "Thank you so much @jonatasgrosman , I greatly appreciate your help with them. \r\nYes, I unfortunately does not have access to a good resource and need it for my\r\nresearch. I greatly appreciate @lhoestq your help with uploading the processed datasets in huggingface datasets. This would be really helpful for some users like me with not access to high-memory GPU resources.\r\n\r\nthank you both so much again.\r\n\r\nOn Sat, Mar 6, 2021 at 12:55 AM Jonatas Grosman <notifications@github.com>\r\nwrote:\r\n\r\n> Hi @dorost1234 <https://github.com/dorost1234>, I think I can help you a\r\n> little. I’ve processed some Wikipedia datasets (Spanish inclusive) using\r\n> the HF/datasets library during recent research.\r\n>\r\n> @lhoestq <https://github.com/lhoestq> Could you help me to upload these\r\n> preprocessed datasets to Huggingface's repositories? To be more precise,\r\n> I've built datasets from the following languages using the 20201201 dumps:\r\n> Spanish, Portuguese, Russian, French, Japanese, Chinese, and Turkish.\r\n> Process these datasets have high costs that most of the community can't\r\n> afford. I think these preprocessed datasets I have could be helpful for\r\n> someone without access to high-resource machines to process Wikipedia's\r\n> dumps like @dorost1234 <https://github.com/dorost1234>\r\n>\r\n> —\r\n> You are receiving this because you were mentioned.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/1994#issuecomment-791798195>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AS37NMWMK5GFJFU3ACCJFUDTCFVNZANCNFSM4YUZIF4A>\r\n> .\r\n>\r\n", "Hi @dorost1234, so sorry, but looking at my files here, I figure out that I've preprocessed files using the HF/datasets for all the languages previously listed by me (Portuguese, Russian, French, Japanese, Chinese, and Turkish) except the Spanish (on my tests I've used the [wikicorpus](https://www.cs.upc.edu/~nlp/wikicorpus/) instead).\r\n\r\nOnly with the Spanish Wikipedia's dump, I had the same `KeyError: '000nbsp'` problem already reported here https://github.com/huggingface/datasets/issues/577\r\n\r\nSo nowadays, even with access to a high resource machine, you couldn't be able to get Wikipedia's Spanish data using the HF/datasets :(\r\n\r\n\r\n\r\n\r\n", "Thanks a lot for the information and help. 
This would be great to have\nthese datasets.\n@lhoestq <https://github.com/lhoestq> Do you know a way I could get\nsmaller amount of these data like 1 GBtype of each language to deal with\ncomputatioanl requirements? thanks\n\nOn Sat, Mar 6, 2021 at 5:36 PM Jonatas Grosman <notifications@github.com>\nwrote:\n\n> Hi @dorost1234 <https://github.com/dorost1234>, so sorry, but looking at\n> my files here, I figure out that I've preprocessed files using the\n> HF/datasets for all the languages previously listed by me (Portuguese,\n> Russian, French, Japanese, Chinese, and Turkish) except the Spanish (on my\n> tests I've used the wikicorpus <https://www.cs.upc.edu/~nlp/wikicorpus/>\n> instead).\n>\n> Only with the Spanish Wikipedia's dump, I had the same KeyError: '000nbsp'\n> problem already reported here #577\n> <https://github.com/huggingface/datasets/issues/577>\n>\n> So nowadays, even with access to a high resource machine, you couldn't be\n> able to get Wikipedia's Spanish data using the HF/datasets :(\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/1994#issuecomment-791985546>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS37NMWMO7WOHWLOROPD6Q3TCJKXPANCNFSM4YUZIF4A>\n> .\n>\n", "Hi ! As mentioned above the Spanish configuration have parsing issues from `mwparserfromhell`. I haven't tested with the latest `mwparserfromhell` >=0.6 though. Which version of `mwparserfromhell` are you using ?\r\n\r\n> @lhoestq Could you help me to upload these preprocessed datasets to Huggingface's repositories? To be more precise, I've built datasets from the following languages using the 20201201 dumps: Spanish, Portuguese, Russian, French, Japanese, Chinese, and Turkish. Process these datasets have high costs that most of the community can't afford. I think these preprocessed datasets I have could be helpful for someone without access to high-resource machines to process Wikipedia's dumps like @dorost1234\r\n\r\nThat would be awesome ! Feel free to ping me on slack so we can put the processed wikipedia files on google storage with the other ones we've already preprocessed.\r\n\r\n> Do you know a way I could get smaller amount of these data like 1 GBtype of each language to deal with computatioanl requirements? thanks\r\n\r\nI'd suggest to copy the [wikipedia.py](https://github.com/huggingface/datasets/blob/master/datasets/wikipedia/wikipedia.py) to a new script `custom_wikipedia.py` and modify it to only download and process only a subset of the raw data files.\r\nYou can for example replace [this line](https://github.com/huggingface/datasets/blob/64e59fc45ca2134218b3e42e83fddddbe840ff74/datasets/wikipedia/wikipedia.py#L446) by:\r\n```python\r\n if total_bytes >= (1 << 30): # stop if the total amount of data is >= 1GB\r\n break\r\n else:\r\n xml_urls.append(_base_url(lang) + fname)\r\n```\r\n\r\nThen you can load your custom wikipedia dataset with\r\n```python\r\nload_dataset(\"path/to/my/custom_wikipedia.py\", f\"{date}.{language}\")\r\n```", "Hi @lhoestq!\r\n\r\n> Hi ! As mentioned above the Spanish configuration have parsing issues from mwparserfromhell. I haven't tested with the latest mwparserfromhell >=0.6 though. Which version of mwparserfromhell are you using ?\r\n\r\nI'm using the latest mwparserfromhell version (0.6)\r\n\r\n> That would be awesome ! 
Feel free to ping me on slack so we can put the processed wikipedia files on google storage with the other ones we've already preprocessed.\r\n\r\nI'll ping you there 👍 ", "Thank you so much @jonatasgrosman and @lhoestq this would be a great help. I am really thankful to you both and to wonderful Huggingface dataset library allowing us to train models at scale." ]
1,614,933,108,000
1,615,495,581,000
null
NONE
null
null
null
Hi, I am trying to run code with the Wikipedia dataset of config 20200501.es and I am getting: ``` Traceback (most recent call last): File "run_mlm_t5.py", line 608, in <module> main() File "run_mlm_t5.py", line 359, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name) File "/dara/libs/anaconda3/envs/success432/lib/python3.7/site-packages/datasets-1.2.1-py3.7.egg/datasets/load.py", line 612, in load_dataset ignore_verifications=ignore_verifications, File "/dara/libs/anaconda3/envs/success432/lib/python3.7/site-packages/datasets-1.2.1-py3.7.egg/datasets/builder.py", line 527, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/dara/libs/anaconda3/envs/success432/lib/python3.7/site-packages/datasets-1.2.1-py3.7.egg/datasets/builder.py", line 1050, in _download_and_prepare "\n\t`{}`".format(usage_example) datasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/ If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). Example of usage: `load_dataset('wikipedia', '20200501.es', beam_runner='DirectRunner')` ``` thanks @lhoestq for any suggestion/help
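As the error message itself suggests, Beam-processed configs such as the Wikipedia dumps can be built locally with the `DirectRunner`. A minimal sketch, assuming the machine has enough memory and disk for the full Spanish dump:

```python
from datasets import load_dataset

# Local build of a Beam-processed config, following the hint in the error
# message. The DirectRunner processes the whole dump on one machine, so this
# assumes enough RAM/disk for the full Spanish Wikipedia.
wiki_es = load_dataset("wikipedia", "20200501.es", beam_runner="DirectRunner")
print(wiki_es["train"][0]["title"])
```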
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1994/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1994/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1993
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1993/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1993/comments
https://api.github.com/repos/huggingface/datasets/issues/1993/events
https://github.com/huggingface/datasets/issues/1993
822,758,387
MDU6SXNzdWU4MjI3NTgzODc=
1,993
How to load a dataset with load_from_disk and save it again after doing transformations without changing the original?
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! That looks like a bug, can you provide some code so that we can reproduce ?\r\nIt's not supposed to update the original dataset", "Hi, I experimented with RAG. \r\n\r\nActually, you can run the [use_own_knowldge_dataset.py](https://github.com/shamanez/transformers/blob/rag-end-to-end-retrieval/examples/research_projects/rag/use_own_knowledge_dataset.py#L80). In the 80 you can save the dataset object to the disk with save_to_disk. Then in order to compute the embeddings in this use **load_from_disk**. \r\n\r\nThen finally save it. You can see the original dataset object (CSV after splitting also will be changed)\r\n\r\nOne more thing- when I save the dataset object with **save_to_disk** it name the arrow file with cache.... rather than using dataset. arrow. Can you add a variable that we can feed a name to save_to_disk function?", "@lhoestq I also found that cache in tmp directory gets updated after transformations. This is really problematic when using datasets interactively. Let's say we use the shards function to a dataset loaded with csv, atm when we do transformations to shards and combine them it updates the original csv cache. ", "I plan to update the save_to_disk method in #2025 so I can make sure the new save_to_disk doesn't corrupt your cache files.\r\nBut from your last message it looks like save_to_disk isn't the root cause right ?", "ok, one more thing. When we use save_to_disk there are two files other than .arrow. dataset_info.json and state.json. Sometimes most of the fields in the dataset_infor.json are null, especially when saving dataset objects. Anyways I think load_from_disk uses the arrow files mentioned in state.json right? ", "> Anyways I think load_from_disk uses the arrow files mentioned in state.json right?\r\n\r\nYes exactly", "Perfect. For now, I am loading the dataset from CSV in my interactive process and will wait until you make the PR!" ]
1,614,921,950,000
1,616,385,950,000
1,616,385,950,000
NONE
null
null
null
I am using the latest datasets library. In my work, I first use **load_from_disk** to load a dataset that contains 3.8 GB of information. Then, during my training process, I update that dataset object, add new elements, and save it in a different place. When I save the dataset with **save_to_disk**, the original dataset, which is already on disk, also gets updated. I do not want it to be updated. How can I prevent this?
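A minimal sketch of the intended workflow (the paths below are placeholders): transformations such as `map` return a new `Dataset` object, and the transformed copy is written to a separate directory so the original on-disk files should stay untouched.

```python
from datasets import load_from_disk

# Load the original dataset (path is a placeholder).
dataset = load_from_disk("/data/original_dataset")

# `map` returns a *new* Dataset object rather than mutating in place.
updated = dataset.map(lambda example: {"extra_field": 0})

# Save the transformed copy somewhere else, leaving the original untouched.
updated.save_to_disk("/data/updated_dataset")
```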
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1993/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1993/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1992
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1992/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1992/comments
https://api.github.com/repos/huggingface/datasets/issues/1992/events
https://github.com/huggingface/datasets/issues/1992
822,672,238
MDU6SXNzdWU4MjI2NzIyMzg=
1,992
`datasets.map` multi processing much slower than single processing
{ "login": "hwijeen", "id": 29157715, "node_id": "MDQ6VXNlcjI5MTU3NzE1", "avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hwijeen", "html_url": "https://github.com/hwijeen", "followers_url": "https://api.github.com/users/hwijeen/followers", "following_url": "https://api.github.com/users/hwijeen/following{/other_user}", "gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}", "starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions", "organizations_url": "https://api.github.com/users/hwijeen/orgs", "repos_url": "https://api.github.com/users/hwijeen/repos", "events_url": "https://api.github.com/users/hwijeen/events{/privacy}", "received_events_url": "https://api.github.com/users/hwijeen/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi @hwijeen, you might want to look at issues #1796 and #1949. I think it could be something related to the I/O operations being performed.", "I see that many people are experiencing the same issue. Is this problem considered an \"official\" bug that is worth a closer look? @lhoestq", "Yes this is an official bug. On my side I haven't managed to reproduce it but @theo-m has. We'll investigate this !", "Thank you for the reply! I would be happy to follow the discussions related to the issue.\r\nIf you do not mind, could you also give a little more explanation on my p.s.2? I am having a hard time figuring out why the single processing `map` uses all of my cores.\r\n@lhoestq @theo-m ", "Regarding your ps2: It depends what function you pass to `map`.\r\nFor example, fast tokenizers from `transformers` in Rust tokenize texts and parallelize the tokenization over all the cores.", "I am still experiencing this issue with datasets 1.9.0..\r\nHas there been a further investigation? \r\n<img width=\"442\" alt=\"image\" src=\"https://user-images.githubusercontent.com/29157715/126143387-8b5ddca2-a896-4e18-abf7-4fbf62a48b41.png\">\r\n" ]
1,614,910,202,000
1,626,689,109,000
null
NONE
null
null
null
Hi, thank you for the great library. I've been using datasets to pretrain language models, and it often involves datasets as large as ~70 GB. My data preparation consists of roughly two steps: `load_dataset`, which splits corpora into a table of sentences, and `map`, which converts a sentence into a list of integers using a tokenizer. I noticed that the `map` function with `num_proc=mp.cpu_count() // 2` takes more than 20 hours to finish the job, whereas `num_proc=1` gets the job done in about 5 hours. The machine I used has 40 cores, with 126 GB of RAM. There were no other jobs running while the `map` function was running. What could be the reason? I would be happy to provide any information necessary to spot the cause. p.s. I was experiencing the imbalance issue mentioned [here](https://github.com/huggingface/datasets/issues/610#issuecomment-705177036) when I was using multiprocessing. p.s.2 When I run `map` with `num_proc=1`, I see one tqdm bar but all the cores are working. When `num_proc=20`, only 20 cores work. ![Screen Shot 2021-03-05 at 11 04 59](https://user-images.githubusercontent.com/29157715/110056895-ef6cf000-7da2-11eb-8307-6698e9fb1ad4.png)
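For reference, a minimal sketch of the two setups being compared (the corpus file and the tokenization function are placeholders, not the reporter's actual 70 GB pipeline):

```python
import multiprocessing as mp
from datasets import load_dataset
from transformers import AutoTokenizer

# Placeholders for the reporter's setup: a local text corpus and a fast tokenizer.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
dataset = load_dataset("text", data_files={"train": "corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

# The two configurations whose runtimes are compared above.
tokenized_single = dataset.map(tokenize, batched=True, num_proc=1)
tokenized_multi = dataset.map(tokenize, batched=True, num_proc=mp.cpu_count() // 2)
```

The "all cores busy with `num_proc=1`" observation in p.s.2 is consistent with the reply below: fast tokenizers parallelize internally in Rust regardless of `num_proc`.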
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1992/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1992/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1991
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1991/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1991/comments
https://api.github.com/repos/huggingface/datasets/issues/1991/events
https://github.com/huggingface/datasets/pull/1991
822,554,473
MDExOlB1bGxSZXF1ZXN0NTg1MTYwNDkx
1,991
Adding the conllpp dataset
{ "login": "ZihanWangKi", "id": 21319243, "node_id": "MDQ6VXNlcjIxMzE5MjQz", "avatar_url": "https://avatars.githubusercontent.com/u/21319243?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZihanWangKi", "html_url": "https://github.com/ZihanWangKi", "followers_url": "https://api.github.com/users/ZihanWangKi/followers", "following_url": "https://api.github.com/users/ZihanWangKi/following{/other_user}", "gists_url": "https://api.github.com/users/ZihanWangKi/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZihanWangKi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZihanWangKi/subscriptions", "organizations_url": "https://api.github.com/users/ZihanWangKi/orgs", "repos_url": "https://api.github.com/users/ZihanWangKi/repos", "events_url": "https://api.github.com/users/ZihanWangKi/events{/privacy}", "received_events_url": "https://api.github.com/users/ZihanWangKi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for the reviews! A note that I have addressed the comments, and waiting for a further review." ]
1,614,896,383,000
1,615,977,459,000
1,615,977,459,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1991", "html_url": "https://github.com/huggingface/datasets/pull/1991", "diff_url": "https://github.com/huggingface/datasets/pull/1991.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1991.patch", "merged_at": 1615977459000 }
Adding the conllpp dataset; this is a revision of https://github.com/huggingface/datasets/pull/1910.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1991/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1991/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1990
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1990/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1990/comments
https://api.github.com/repos/huggingface/datasets/issues/1990/events
https://github.com/huggingface/datasets/issues/1990
822,384,502
MDU6SXNzdWU4MjIzODQ1MDI=
1,990
OSError: Memory mapping file failed: Cannot allocate memory
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Do you think this is trying to bring the dataset into memory and if I can avoid it to save on memory so it only brings a batch into memory? @lhoestq thank you", "It's not trying to bring the dataset into memory.\r\n\r\nActually, it's trying to memory map the dataset file, which is different. It allows to load large dataset files without filling up memory.\r\n\r\nWhat dataset did you use to get this error ?\r\nOn what OS are you running ? What's your python and pyarrow version ?", "Dear @lhoestq \r\nthank you so much for coming back to me. Please find info below:\r\n1) Dataset name: I used wikipedia with config 20200501.en\r\n2) I got these pyarrow in my environment:\r\npyarrow 2.0.0 <pip>\r\npyarrow 3.0.0 <pip>\r\n\r\n3) python version 3.7.10\r\n4) OS version \r\n\r\nlsb_release -a\r\nNo LSB modules are available.\r\nDistributor ID:\tDebian\r\nDescription:\tDebian GNU/Linux 10 (buster)\r\nRelease:\t10\r\nCodename:\tbuster\r\n\r\n\r\nIs there a way I could solve the memory issue and if I could run this model, I am using GeForce GTX 108, \r\nthanks \r\n", "I noticed that the error happens when loading the validation dataset.\r\nWhat value of `data_args.validation_split_percentage` did you use ?", "Dear @lhoestq \r\n\r\nthank you very much for the very sharp observation, indeed, this happens there, I use the default value of 5, I basically plan to subsample a part of the large dataset and choose it as validation set. Do you think this is bringing the data into memory during subsampling? Is there a way I could avoid this?\r\n\r\nThank you very much for the great help.\r\n\r\n\r\nOn Mon, Mar 8, 2021 at 11:28 AM Quentin Lhoest ***@***.***>\r\nwrote:\r\n\r\n> I noticed that the error happens when loading the validation dataset.\r\n> What value of data_args.validation_split_percentage did you use ?\r\n>\r\n> —\r\n> You are receiving this because you authored the thread.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/1990#issuecomment-792655644>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AS37NMS337ZUJ7HGGVVCCR3TCSREFANCNFSM4YTYAQ2A>\r\n> .\r\n>\r\n", "Methods like `dataset.shard`, `dataset.train_test_split`, `dataset.select` etc. don't bring the dataset in memory. \r\nThe only time when samples are brought to memory is when you access elements via `dataset[0]`, `dataset[:10]`, `dataset[\"my_column_names\"]`.\r\n\r\nBut it's possible that trying to use those methods to build your validation set doesn't fix the issue since, if I understand correctly, the error happens when when the dataset arrow file is opened (just before the 5% percentage is applied).\r\n\r\nDid you try to reproduce this issue in a google colab ? This would be super helpful to investigate why this happened.\r\n\r\nAlso maybe you can try clearing your cache at `~/.cache/huggingface/datasets` and try again. If the arrow file was corrupted somehow, removing it and rebuilding may fix the issue." ]
1,614,882,118,000
1,628,100,265,000
1,628,100,265,000
NONE
null
null
null
Hi, I am trying to run code with the wikipedia dataset; here is the command to reproduce the error. You can find the code for run_mlm.py in the huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py ``` python run_mlm.py --model_name_or_path bert-base-multilingual-cased --dataset_name wikipedia --dataset_config_name 20200501.en --do_train --do_eval --output_dir /dara/test --max_seq_length 128 ``` I am using transformers version 4.3.2. But I got a memory error using this dataset; is there a way I could save memory with the datasets library when using the wikipedia dataset? In particular, I need to train a model on multiple wikipedia datasets concatenated. thank you very much @lhoestq for your help and suggestions: ``` File "run_mlm.py", line 441, in <module> main() File "run_mlm.py", line 233, in main split=f"train[{data_args.validation_split_percentage}%:]", File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/load.py", line 750, in load_dataset ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 740, in as_dataset map_tuple=True, File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/utils/py_utils.py", line 225, in map_nested return function(data_struct) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 757, in _build_single_dataset in_memory=in_memory, File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 829, in _as_dataset in_memory=in_memory, File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 215, in read return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 236, in read_files pa_table = self._read_files(files, in_memory=in_memory) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 171, in _read_files pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 302, in _get_dataset_from_filename pa_table = ArrowReader.read_table(filename, in_memory=in_memory) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 322, in read_table stream = stream_from(filename) File "pyarrow/io.pxi", line 782, in pyarrow.lib.memory_map File "pyarrow/io.pxi", line 743, in pyarrow.lib.MemoryMappedFile._open File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status OSError: Memory mapping file failed: Cannot allocate memory ```
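For context, a sketch of how the example script derives the validation split, since the traceback shows the failure happens while memory-mapping the Arrow file for that split. The slicing syntax below is the same one `run_mlm.py` uses; note the mapping error is raised before the percentage slice is applied, so (per the thread) clearing a possibly corrupted cache under `~/.cache/huggingface/datasets` is the other suggested remedy.

```python
from datasets import load_dataset

# The script builds train/validation from slices of the same split; the
# memory-mapping above fails while opening the Arrow file, i.e. before the
# percentage slice is even applied.
validation = load_dataset("wikipedia", "20200501.en", split="train[:5%]")
train = load_dataset("wikipedia", "20200501.en", split="train[5%:]")
```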
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1990/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1990/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1989
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1989/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1989/comments
https://api.github.com/repos/huggingface/datasets/issues/1989/events
https://github.com/huggingface/datasets/issues/1989
822,328,147
MDU6SXNzdWU4MjIzMjgxNDc=
1,989
Question/problem with dataset labels
{ "login": "ioana-blue", "id": 17202292, "node_id": "MDQ6VXNlcjE3MjAyMjky", "avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ioana-blue", "html_url": "https://github.com/ioana-blue", "followers_url": "https://api.github.com/users/ioana-blue/followers", "following_url": "https://api.github.com/users/ioana-blue/following{/other_user}", "gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}", "starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions", "organizations_url": "https://api.github.com/users/ioana-blue/orgs", "repos_url": "https://api.github.com/users/ioana-blue/repos", "events_url": "https://api.github.com/users/ioana-blue/events{/privacy}", "received_events_url": "https://api.github.com/users/ioana-blue/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "It seems that I get parsing errors for various fields in my data. For example now I get this:\r\n```\r\n File \"../../../models/tr-4.3.2/run_puppets.py\", line 523, in <module>\r\n main()\r\n File \"../../../models/tr-4.3.2/run_puppets.py\", line 249, in main\r\n datasets = load_dataset(\"csv\", data_files=data_files)\r\n File \"/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/load.py\", line 740, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py\", line 572, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py\", line 650, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py\", line 1028, in _prepare_split\r\n writer.write_table(table)\r\n File \"/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/arrow_writer.py\", line 292, in write_table\r\n pa_table = pa_table.cast(self._schema)\r\n File \"pyarrow/table.pxi\", line 1311, in pyarrow.lib.Table.cast\r\n File \"pyarrow/table.pxi\", line 265, in pyarrow.lib.ChunkedArray.cast\r\n File \"/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/pyarrow/compute.py\", line 87, in cast\r\n return call_function(\"cast\", [arr], options)\r\n File \"pyarrow/_compute.pyx\", line 298, in pyarrow._compute.call_function\r\n File \"pyarrow/_compute.pyx\", line 192, in pyarrow._compute.Function.call\r\n File \"pyarrow/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 84, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Failed to parse string: https://www.netgalley.com/catalog/book/121872\r\n```", "Not sure if this helps, this is how I load my files (as in the sample scripts on transformers):\r\n\r\n```\r\n if data_args.train_file.endswith(\".csv\"):\r\n # Loading a dataset from local csv files\r\n datasets = load_dataset(\"csv\", data_files=data_files)\r\n```", "Since this worked out of the box in a few examples before, I wonder if it's some quoting issue or something else. ", "Hi @ioana-blue,\r\nCan you share a sample from your .csv? A dummy where you get this error will also help.\r\n\r\nI tried this csv:\r\n```csv\r\nfeature,label\r\n1.2,not nurse\r\n1.3,nurse\r\n1.5,surgeon\r\n```\r\nand the following snippet:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nd = load_dataset(\"csv\",data_files=['test.csv'])\r\n\r\nprint(d)\r\nprint(d['train']['label'])\r\n```\r\nand this works perfectly fine for me:\r\n```sh\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['feature', 'label'],\r\n num_rows: 3\r\n })\r\n})\r\n['not nurse', 'nurse', 'surgeon']\r\n```\r\nI'm sure your csv is more complicated than this one. But it is hard to tell where the issue might be without looking at a sample.", "I've had versions where it worked fain. For this dataset, I had all kind of parsing issues that I couldn't understand. What I ended up doing is strip all the columns that I didn't need and also make the label 0/1. \r\n\r\nI think one line that may have caused a problem was the csv version of this:\r\n\r\n```crawl-data/CC-MAIN-2017-47/segments/1510934806225.78/wet/CC-MAIN-20171120203833-20171120223833-00571.warc.wet.gz Rose Blakey is an aspiring journalist. 
She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead-end waitressing job. ^M ('Rose', '', 'Blakey') journalist F 38 journalist https://www.netgalley.com/catalog/book/121872 _ is desperate to escape the from the small Australian town in which _ lives. Rejection after rejection mean _ is stuck in what _ sees as a dead-end waitressing job. She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead-end waitressing job.```\r\n\r\nThe error I got in this case is this one: https://github.com/huggingface/datasets/issues/1989#issuecomment-790842771\r\n\r\nNote, this line was part of a much larger file and until this line I guess it was working fine. ", "Hi @ioana-blue,\r\n\r\nWhat is the separator you're using for the csv? I see there are only two commas in the given line, but they don't seem like appropriate points. Also, is this a string part of one line, or an entire line? There should also be a label, right?", "Sorry for the confusion, the sample above was from a tsv that was used to derive the csv. Let me construct the csv again (I had remove it). \r\n\r\nThis is the line in the csv - this is the whole line:\r\n```crawl-data/CC-MAIN-2017-47/segments/1510934806225.78/wet/CC-MAIN-20171120203833-20171120223833-00571.warc.wet.gz,Rose Blakey is an aspiring journalist. She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead,\"('Rose', '', 'Blakey')\",journalist,F,38,journalist,https://www.netgalley.com/catalog/book/121872,_ is desperate to escape the from the small Australian town in which _ lives. Rejection after rejection mean _ is stuck in what _ sees as a dead-end waitressing job., She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead-end waitressing job.```", "Hi,\r\nJust in case you want to use tsv directly, you can use the separator argument while loading the dataset.\r\n```python\r\nd = load_dataset(\"csv\",data_files=['test.csv'],sep=\"\\t\")\r\n```\r\n\r\nAdditionally, I don't face the issues with the following csv (same as the one you provided):\r\n\r\n```sh\r\nlink1,text1,info1,info2,info3,info4,info5,link2,text2,text3\r\ncrawl-data/CC-MAIN-2017-47/segments/1510934806225.78/wet/CC-MAIN-20171120203833-20171120223833-00571.warc.wet.gz,Rose Blakey is an aspiring journalist. She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead,\"('Rose', '', 'Blakey')\",journalist,F,38,journalist,https://www.netgalley.com/catalog/book/121872,_ is desperate to escape the from the small Australian town in which _ lives. Rejection after rejection mean _ is stuck in what _ sees as a dead-end waitressing job., She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead-end waitressing job.\r\n```\r\nOutput after loading:\r\n```sh\r\n{'link1': 'crawl-data/CC-MAIN-2017-47/segments/1510934806225.78/wet/CC-MAIN-20171120203833-20171120223833-00571.warc.wet.gz', 'text1': 'Rose Blakey is an aspiring journalist. She is desperate to escape the from the small Australian town in which she lives. 
Rejection after rejection mean she is stuck in what she sees as a dead', 'info1': \"('Rose', '', 'Blakey')\", 'info2': 'journalist', 'info3': 'F', 'info4': 38, 'info5': 'journalist', 'link2': 'https://www.netgalley.com/catalog/book/121872', 'text2': '_ is desperate to escape the from the small Australian town in which _ lives. Rejection after rejection mean _ is stuck in what _ sees as a dead-end waitressing job.', 'text3': ' She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead-end waitressing job.'}\r\n```\r\nCan you check once if the tsv works for you directly using the separator argument? The conversion from tsv to csv could create issues, I'm only guessing though.", "thanks for the tip. very strange :/ I'll check my datasets version as well. \r\n\r\nI will have more similar experiments soon so I'll let you know if I manage to get rid of this. ", "No problem at all. I thought I'd be able to solve this but I'm unable to replicate the issue :/" ]
1,614,877,613,000
1,615,455,855,000
null
NONE
null
null
null
Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon". This is the trace I get: ``` File "../../../models/tr-4.3.2/run_puppets.py", line 523, in <module> main() File "../../../models/tr-4.3.2/run_puppets.py", line 249, in main datasets = load_dataset("csv", data_files=data_files) File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/load.py", line 740, in load_dataset builder_instance.download_and_prepare( File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py", line 572, in download_and_prepare self._download_and_prepare( File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py", line 650, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py", line 1028, in _prepare_split writer.write_table(table) File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/arrow_writer.py", line 292, in write_table pa_table = pa_table.cast(self._schema) File "pyarrow/table.pxi", line 1311, in pyarrow.lib.Table.cast File "pyarrow/table.pxi", line 265, in pyarrow.lib.ChunkedArray.cast File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/pyarrow/compute.py", line 87, in cast return call_function("cast", [arr], options) File "pyarrow/_compute.pyx", line 298, in pyarrow._compute.call_function File "pyarrow/_compute.pyx", line 192, in pyarrow._compute.Function.call File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Failed to parse string: not nurse ``` Any ideas how to fix this? For now, I'll probably make them numeric.
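One hedged workaround for the failure above, assuming the root cause is type inference on the label column (earlier rows may have looked numeric, so the string "not nurse" cannot be cast): pass explicit `features` when loading the CSV. The column names below are assumptions about the reporter's file.

```python
from datasets import Features, Value, load_dataset

# Declare column types explicitly so the label stays a plain string and no
# cast is attempted. Column names are assumptions about the reporter's CSV.
features = Features({"text": Value("string"), "label": Value("string")})
datasets = load_dataset("csv", data_files={"train": "train.csv"}, features=features)
print(datasets["train"].features)
```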
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1989/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1989/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1988
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1988/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1988/comments
https://api.github.com/repos/huggingface/datasets/issues/1988/events
https://github.com/huggingface/datasets/issues/1988
822,324,605
MDU6SXNzdWU4MjIzMjQ2MDU=
1,988
README.md is misleading about the kinds of datasets?
{ "login": "surak", "id": 878399, "node_id": "MDQ6VXNlcjg3ODM5OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/878399?v=4", "gravatar_id": "", "url": "https://api.github.com/users/surak", "html_url": "https://github.com/surak", "followers_url": "https://api.github.com/users/surak/followers", "following_url": "https://api.github.com/users/surak/following{/other_user}", "gists_url": "https://api.github.com/users/surak/gists{/gist_id}", "starred_url": "https://api.github.com/users/surak/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/surak/subscriptions", "organizations_url": "https://api.github.com/users/surak/orgs", "repos_url": "https://api.github.com/users/surak/repos", "events_url": "https://api.github.com/users/surak/events{/privacy}", "received_events_url": "https://api.github.com/users/surak/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Yes it's possible to use image data. There are already a few of them available (MNIST, CIFAR..)" ]
1,614,877,460,000
1,628,100,323,000
1,628,100,323,000
NONE
null
null
null
Hi! In the README.md, you say: "efficient data pre-processing: simple, fast and reproducible data pre-processing for the above public datasets as well as your own local datasets in CSV/JSON/text." But here: https://github.com/huggingface/datasets/blob/master/templates/new_dataset_script.py#L82-L117 you mention other kinds of datasets, with images and so on. I'm confused. Is it possible to use the library to store, say, ImageNet locally?
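For what it's worth, image datasets already load through the same API as text ones; a small sketch using MNIST, one of the built-ins mentioned in the reply in this thread:

```python
from datasets import load_dataset

# Image datasets load through the same API; MNIST is one of the built-ins
# mentioned in the reply to this issue.
mnist = load_dataset("mnist")
print(mnist)
```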
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1988/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1988/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1987
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1987/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1987/comments
https://api.github.com/repos/huggingface/datasets/issues/1987/events
https://github.com/huggingface/datasets/issues/1987
822,308,956
MDU6SXNzdWU4MjIzMDg5NTY=
1,987
wmt15 is broken
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,614,876,385,000
1,614,876,385,000
null
CONTRIBUTOR
null
null
null
While testing the hotfix, I tried a random other wmt release and found wmt15 to be broken: ``` python -c 'from datasets import load_dataset; load_dataset("wmt15", "de-en")' Downloading: 2.91kB [00:00, 818kB/s] Downloading: 3.02kB [00:00, 897kB/s] Downloading: 41.1kB [00:00, 19.1MB/s] Downloading and preparing dataset wmt15/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt15/de-en/1.0.0/39ad5f9262a0910a8ad7028ad432731ad23fdf91f2cebbbf2ba4776b9859e87f... Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/load.py", line 740, in load_dataset builder_instance.download_and_prepare( File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/builder.py", line 578, in download_and_prepare self._download_and_prepare( File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/builder.py", line 634, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt15/39ad5f9262a0910a8ad7028ad432731ad23fdf91f2cebbbf2ba4776b9859e87f/wmt_utils.py", line 757, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 283, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 191, in download downloaded_path_or_paths = map_nested( File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 203, in map_nested mapped = [ File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 204, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 160, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 160, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 142, in _single_map_nested return function(data_struct) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 214, in _download return cached_path(url_or_filename, download_config=download_config) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 274, in cached_path output_path = get_from_cache( File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 614, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/wmt/wmt15/resolve/main/training-parallel-nc-v10.tgz ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1987/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1987/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1986
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1986/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1986/comments
https://api.github.com/repos/huggingface/datasets/issues/1986/events
https://github.com/huggingface/datasets/issues/1986
822,176,290
MDU6SXNzdWU4MjIxNzYyOTA=
1,986
wmt datasets fail to load
{ "login": "sabania", "id": 32322564, "node_id": "MDQ6VXNlcjMyMzIyNTY0", "avatar_url": "https://avatars.githubusercontent.com/u/32322564?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sabania", "html_url": "https://github.com/sabania", "followers_url": "https://api.github.com/users/sabania/followers", "following_url": "https://api.github.com/users/sabania/following{/other_user}", "gists_url": "https://api.github.com/users/sabania/gists{/gist_id}", "starred_url": "https://api.github.com/users/sabania/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sabania/subscriptions", "organizations_url": "https://api.github.com/users/sabania/orgs", "repos_url": "https://api.github.com/users/sabania/repos", "events_url": "https://api.github.com/users/sabania/events{/privacy}", "received_events_url": "https://api.github.com/users/sabania/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "caching issue, seems to work again.." ]
1,614,867,535,000
1,614,868,267,000
1,614,868,267,000
NONE
null
null
null
~\.cache\huggingface\modules\datasets_modules\datasets\wmt14\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\wmt_utils.py in _split_generators(self, dl_manager) 758 # Extract manually downloaded files. 759 manual_files = dl_manager.extract(manual_paths_dict) --> 760 extraction_map = dict(downloaded_files, **manual_files) 761 762 for language in self.config.language_pair: TypeError: type object argument after ** must be a mapping, not list
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1986/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1986/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1985
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1985/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1985/comments
https://api.github.com/repos/huggingface/datasets/issues/1985/events
https://github.com/huggingface/datasets/pull/1985
822,170,651
MDExOlB1bGxSZXF1ZXN0NTg0ODM4NjIw
1,985
Optimize int precision
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq, are the tests OK? Some other cases I missed? Do you agree with this approach?", "I just tested this and it works like a charm :) \r\n\r\nHowever tokenizing and then setting the format to \"torch\" to feed the tokens into a model doesn't seem to work anymore, since the pytorch tensors have the int32/int8 precisions instead of int64 that is required as model inputs.\r\n\r\nFor example:\r\n\r\n```python\r\nimport torch\r\nfrom datasets import Dataset\r\nfrom transformers import BertModel, BertTokenizer\r\n\r\ntorch.set_grad_enabled(False)\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\nmodel = BertModel.from_pretrained(\"bert-base-uncased\")\r\n\r\ndataset = Dataset.from_dict({\"text\": [\"hello there !\"]})\r\ndataset = dataset.map(tokenizer, input_columns=\"text\", remove_columns=dataset.column_names)\r\ndataset = dataset.with_format(\"torch\")\r\n\r\nprint(dataset.features)\r\n# {'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None),\r\n# 'input_ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), # this should be int32 though\r\n# 'token_type_ids': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None)}\r\n\r\nmodel(**dataset[:1])\r\n# RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.CharTensor instead (while checking arguments for embedding)\r\n\r\ndataset = dataset.with_format(\"torch\", dtype=torch.int64)\r\n\r\nmodel(**dataset[:1])\r\n# works as expected\r\n```\r\n\r\nPinging @sgugger here to make sure we take the right decision here.\r\n\r\nDo we want the \"torch\" format to always return int64 ? Or does it have to keep the precision defined by the `dataset.features` \r\n and therefore we would need to specify \"torch\" with `dtype=torch.int64` ?", "From a user perspective, I think it's fine if the \"torch\" format converts all ints types to `torch.int64` by default since it's what the model will need almost all the time. I don't see a case where you would want to keep the low precision at the top of my head, and one can always write a custom transform for an edge case.", "Sounds good to me !\r\nFor consistency maybe we should make the float precision fixed as well (float32, I guess)", "Yes, that would be the one used by default.", "Do we have the same requirements for TensorFlow?", "Yes I we should do the same for tensorflow as well since tf models would have the same issue\r\n\r\nThanks for adding this :)", "@lhoestq I think this PR is ready... :)" ]
1,614,867,143,000
1,616,414,680,000
1,615,887,840,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1985", "html_url": "https://github.com/huggingface/datasets/pull/1985", "diff_url": "https://github.com/huggingface/datasets/pull/1985.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1985.patch", "merged_at": 1615887840000 }
Optimize int precision to reduce dataset file size. Close #1973, close #1825, close #861.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1985/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1985/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1984
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1984/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1984/comments
https://api.github.com/repos/huggingface/datasets/issues/1984/events
https://github.com/huggingface/datasets/issues/1984
821,816,588
MDU6SXNzdWU4MjE4MTY1ODg=
1,984
Add tests for WMT datasets
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,614,840,402,000
1,614,840,402,000
null
MEMBER
null
null
null
As requested in #1981, we need tests for WMT datasets, using dummy data.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1984/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1984/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1983
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1983/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1983/comments
https://api.github.com/repos/huggingface/datasets/issues/1983/events
https://github.com/huggingface/datasets/issues/1983
821,746,008
MDU6SXNzdWU4MjE3NDYwMDg=
1,983
The size of CoNLL-2003 is not consistent with the official release.
{ "login": "h-peng17", "id": 39556019, "node_id": "MDQ6VXNlcjM5NTU2MDE5", "avatar_url": "https://avatars.githubusercontent.com/u/39556019?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h-peng17", "html_url": "https://github.com/h-peng17", "followers_url": "https://api.github.com/users/h-peng17/followers", "following_url": "https://api.github.com/users/h-peng17/following{/other_user}", "gists_url": "https://api.github.com/users/h-peng17/gists{/gist_id}", "starred_url": "https://api.github.com/users/h-peng17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h-peng17/subscriptions", "organizations_url": "https://api.github.com/users/h-peng17/orgs", "repos_url": "https://api.github.com/users/h-peng17/repos", "events_url": "https://api.github.com/users/h-peng17/events{/privacy}", "received_events_url": "https://api.github.com/users/h-peng17/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi,\r\n\r\nif you inspect the raw data, you can find there are 946 occurrences of `-DOCSTART- -X- -X- O` in the train split and `14041 + 946 = 14987`, which is exactly the number of sentences the authors report. `-DOCSTART-` is a special line that acts as a boundary between two different documents and is filtered out in our implementation.\r\n\r\n@lhoestq What do you think about including these lines? ([Link](https://github.com/flairNLP/flair/issues/1097) to a similar issue in the flairNLP repo)", "We should mention in the Conll2003 dataset card that these lines have been removed indeed.\r\n\r\nIf some users are interested in using these lines (maybe to recombine documents ?) then we can add a parameter to the conll2003 dataset to include them.\r\n\r\nBut IMO the default config should stay the current one (without the `-DOCSTART-` stuff), so that you can directly train NER models without additional preprocessing. Let me know what you think", "@lhoestq Yes, I agree adding a small note should be sufficient.\r\n\r\nCurrently, NLTK's `ConllCorpusReader` ignores the `-DOCSTART-` lines so I think it's ok if we do the same. If there is an interest in the future to use these lines, then we can include them.", "I added a mention of this in conll2003's dataset card:\r\nhttps://github.com/huggingface/datasets/blob/fc9796920da88486c3b97690969aabf03d6b4088/datasets/conll2003/README.md#conll2003\r\n\r\nEdit: just saw your PR @mariosasko (noticed it too late ^^)\r\nLet me take a look at it :)" ]
1,614,832,894,000
1,615,220,665,000
null
NONE
null
null
null
Thanks for sharing the dataset! However, when I use CoNLL-2003, I have some questions. The statistics of conll-2003 in this repo are: \#train 14041 \#dev 3250 \#test 3453 While the official statistics are: \#train 14987 \#dev 3466 \#test 3684 Looking forward to your reply~
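As the comments above explain, the gap is exactly the number of `-DOCSTART-` document-boundary lines that the loader filters out. A minimal sketch to verify this locally, assuming you have the raw CoNLL-2003 train file:

```python
# Count the -DOCSTART- boundary lines in the raw train split; adding them to
# the 14041 sentences loaded by this repo recovers the official count.
with open("eng.train", encoding="utf-8") as f:
    n_docstart = sum(1 for line in f if line.startswith("-DOCSTART-"))
print(n_docstart)          # 946
print(14041 + n_docstart)  # 14987, the official train size
```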
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1983/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1983/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1982
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1982/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1982/comments
https://api.github.com/repos/huggingface/datasets/issues/1982/events
https://github.com/huggingface/datasets/pull/1982
821,448,791
MDExOlB1bGxSZXF1ZXN0NTg0MjM2NzQ0
1,982
Fix NestedDataStructure.data for empty dict
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I validated that this fixed the problem, thank you, @albertvillanova!\r\n", "still facing the same issue or similar:\r\nfrom datasets import load_dataset\r\nwtm14_test = load_dataset('wmt14',\"de-en\",cache_dir='./datasets')\r\n\r\n~\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\wmt14\\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\\wmt_utils.py in _split_generators(self, dl_manager)\r\n 758 # Extract manually downloaded files.\r\n 759 manual_files = dl_manager.extract(manual_paths_dict)\r\n--> 760 extraction_map = dict(downloaded_files, **manual_files)\r\n 761 \r\n 762 for language in self.config.language_pair:\r\n\r\nTypeError: type object argument after ** must be a mapping, not list", "Hi @sabania \r\nWe released a patch version that fixes this issue (1.4.1), can you try with the new version please ?\r\n```\r\npip install --upgrade datasets\r\n```", "I re-validated with the hotfix and the problem is no more.", "It's working. thanks a lot." ]
1,614,802,611,000
1,614,876,364,000
1,614,811,716,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1982", "html_url": "https://github.com/huggingface/datasets/pull/1982", "diff_url": "https://github.com/huggingface/datasets/pull/1982.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1982.patch", "merged_at": 1614811716000 }
Fix #1981
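For context, a minimal sketch of the failure mode this PR addresses (variable names are taken from the traceback in #1981; the empty-list behaviour is an assumption about the pre-fix `NestedDataStructure.data`):

```python
# Before the fix, flattening an empty nested structure could yield a list,
# which breaks the keyword-unpacking merge used in wmt_utils.py:
downloaded_files = {"train": "/path/train.tgz"}
manual_files = []  # empty result represented as a list
try:
    dict(downloaded_files, **manual_files)
except TypeError as e:
    print(e)  # argument after ** must be a mapping, not list

# With the fix, an empty nested structure yields a dict, so the merge works:
manual_files = {}
extraction_map = dict(downloaded_files, **manual_files)
print(extraction_map)  # {'train': '/path/train.tgz'}
```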
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1982/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1982/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1981
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1981/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1981/comments
https://api.github.com/repos/huggingface/datasets/issues/1981/events
https://github.com/huggingface/datasets/issues/1981
821,411,109
MDU6SXNzdWU4MjE0MTExMDk=
1,981
wmt datasets fail to load
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "@stas00 Mea culpa... May I fix this tomorrow morning?", "yes, of course, I reverted to the version before that and it works ;)\r\n\r\nbut since a new release was just made you will probably need to make a hotfix.\r\n\r\nand add the wmt to the tests?", "Sure, I will implement a regression test!", "@stas00 it is fixed. @lhoestq are you releasing the hot fix or would you prefer me to do it?", "I'll do a patch release for this issue early tomorrow.\r\n\r\nAnd yes we absolutly need tests for the wmt datasets: The missing tests for wmt are an artifact from the early development of the lib but now we have tools to generate automatically the dummy data used for tests :)", "still facing the same issue or similar:\r\nfrom datasets import load_dataset\r\nwtm14_test = load_dataset('wmt14',\"de-en\",cache_dir='./datasets')\r\n\r\n~.cache\\huggingface\\modules\\datasets_modules\\datasets\\wmt14\\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\\wmt_utils.py in _split_generators(self, dl_manager)\r\n758 # Extract manually downloaded files.\r\n759 manual_files = dl_manager.extract(manual_paths_dict)\r\n--> 760 extraction_map = dict(downloaded_files, **manual_files)\r\n761\r\n762 for language in self.config.language_pair:\r\n\r\nTypeError: type object argument after ** must be a mapping, not list" ]
1,614,799,299,000
1,614,867,407,000
1,614,811,716,000
CONTRIBUTOR
null
null
null
on master: ``` python -c 'from datasets import load_dataset; load_dataset("wmt14", "de-en")' Downloading and preparing dataset wmt14/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt14/de-en/1.0.0/43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e... Traceback (most recent call last): File "<string>", line 1, in <module> File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 740, in load_dataset builder_instance.download_and_prepare( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 578, in download_and_prepare self._download_and_prepare( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 634, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt14/43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e/wmt_utils.py", line 760, in _split_generators extraction_map = dict(downloaded_files, **manual_files) ``` it worked fine recently. same problem if I try wmt16. git bisect points to this commit from Feb 25 as the culprit https://github.com/huggingface/datasets/commit/792f1d9bb1c5361908f73e2ef7f0181b2be409fa @albertvillanova
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1981/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 1, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1981/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1980
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1980/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1980/comments
https://api.github.com/repos/huggingface/datasets/issues/1980/events
https://github.com/huggingface/datasets/pull/1980
821,312,810
MDExOlB1bGxSZXF1ZXN0NTg0MTI1OTUy
1,980
Loading all answers from drop
{ "login": "KaijuML", "id": 25499439, "node_id": "MDQ6VXNlcjI1NDk5NDM5", "avatar_url": "https://avatars.githubusercontent.com/u/25499439?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KaijuML", "html_url": "https://github.com/KaijuML", "followers_url": "https://api.github.com/users/KaijuML/followers", "following_url": "https://api.github.com/users/KaijuML/following{/other_user}", "gists_url": "https://api.github.com/users/KaijuML/gists{/gist_id}", "starred_url": "https://api.github.com/users/KaijuML/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KaijuML/subscriptions", "organizations_url": "https://api.github.com/users/KaijuML/orgs", "repos_url": "https://api.github.com/users/KaijuML/repos", "events_url": "https://api.github.com/users/KaijuML/events{/privacy}", "received_events_url": "https://api.github.com/users/KaijuML/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Nice thanks for the change !\r\nThis looks all good to me\r\n\r\nBefore we merge can you just update the dataset_infos.json file of drop ? You can do it by running\r\n```\r\ndatasets-cli test ./datasets/drop --all_configs --save_infos --ignore_verifications\r\n```", "Done!" ]
1,614,791,587,000
1,615,807,646,000
1,615,807,646,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1980", "html_url": "https://github.com/huggingface/datasets/pull/1980", "diff_url": "https://github.com/huggingface/datasets/pull/1980.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1980.patch", "merged_at": 1615807646000 }
Hello all, I propose this change to the DROP loading script so that all answers are loaded no matter their type. Currently, only "span" answers are loaded, which excludes a significant number of answers from DROP (i.e. "number" and "date"). I updated the script with the version I use for my work. However, I couldn't find a way to verify that everything works when integrated with the datasets repo, since the `load_dataset` method seems to always download the script from GitHub and not from local files. Note that 9 items from the train set have no answers, as well as 1 from the validation set. The script I propose simply does not load them. Let me know if there is anything else I can do, Clément
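A hedged sketch of the answer-handling logic described above; the field names follow DROP's published answer format, and the helper itself is illustrative rather than the PR's actual code:

```python
# Keep an answer whether it is a span, a number or a date; return None for
# the few items (9 in train, 1 in validation) that have no answer at all.
def extract_answer(answer: dict):
    if answer.get("spans"):
        return {"type": "spans", "value": answer["spans"]}
    if answer.get("number"):
        return {"type": "number", "value": answer["number"]}
    date = answer.get("date") or {}
    if any(date.get(key) for key in ("day", "month", "year")):
        return {"type": "date", "value": date}
    return None  # unanswered item: skipped by the loading script
```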
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1980/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1980/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1979
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1979/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1979/comments
https://api.github.com/repos/huggingface/datasets/issues/1979/events
https://github.com/huggingface/datasets/pull/1979
820,977,853
MDExOlB1bGxSZXF1ZXN0NTgzODQ3MTk3
1,979
Add article_id and process test set template for semeval 2020 task 11…
{ "login": "hemildesai", "id": 8195444, "node_id": "MDQ6VXNlcjgxOTU0NDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/8195444?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hemildesai", "html_url": "https://github.com/hemildesai", "followers_url": "https://api.github.com/users/hemildesai/followers", "following_url": "https://api.github.com/users/hemildesai/following{/other_user}", "gists_url": "https://api.github.com/users/hemildesai/gists{/gist_id}", "starred_url": "https://api.github.com/users/hemildesai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hemildesai/subscriptions", "organizations_url": "https://api.github.com/users/hemildesai/orgs", "repos_url": "https://api.github.com/users/hemildesai/repos", "events_url": "https://api.github.com/users/hemildesai/events{/privacy}", "received_events_url": "https://api.github.com/users/hemildesai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks !\r\nNow to fix the CI the only thing left is to add a dummy `test-task-tc-template.out` file inside the `dummy_data.zip` at `./datasets/sem_eval_2020_task_11/dummy/1.1.0`\r\nIt must contain the labels template for each dummy article of the test set included in `dummy_data.zip`\r\n\r\nAfter that we should be good to merge this one :)", "@lhoestq Made the changes! The failure now seems to be unrelated to the changes. Any idea what's going on?", "This is a bug on master that we're investigating. You can ignore it" ]
1,614,767,672,000
1,615,633,180,000
1,615,554,650,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1979", "html_url": "https://github.com/huggingface/datasets/pull/1979", "diff_url": "https://github.com/huggingface/datasets/pull/1979.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1979.patch", "merged_at": 1615554650000 }
… dataset - `article_id` is needed to create the submission file for the task at https://propaganda.qcri.org/semeval2020-task11/ - The `technique classification` task provides the span indices in a template for the test set that is necessary to complete the task. This PR implements processing of that template for the dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1979/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1979/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1978
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1978/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1978/comments
https://api.github.com/repos/huggingface/datasets/issues/1978/events
https://github.com/huggingface/datasets/pull/1978
820,956,806
MDExOlB1bGxSZXF1ZXN0NTgzODI5Njgz
1,978
Adding ro sts dataset
{ "login": "lorinczb", "id": 36982089, "node_id": "MDQ6VXNlcjM2OTgyMDg5", "avatar_url": "https://avatars.githubusercontent.com/u/36982089?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lorinczb", "html_url": "https://github.com/lorinczb", "followers_url": "https://api.github.com/users/lorinczb/followers", "following_url": "https://api.github.com/users/lorinczb/following{/other_user}", "gists_url": "https://api.github.com/users/lorinczb/gists{/gist_id}", "starred_url": "https://api.github.com/users/lorinczb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lorinczb/subscriptions", "organizations_url": "https://api.github.com/users/lorinczb/orgs", "repos_url": "https://api.github.com/users/lorinczb/repos", "events_url": "https://api.github.com/users/lorinczb/events{/privacy}", "received_events_url": "https://api.github.com/users/lorinczb/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq thank you very much for the quick review and useful comments! \r\n\r\nI have tried to address them all, and a few comments that you left for ro_sts I have applied to the ro_sts_parallel as well (in read-me: fixed source_datasets, links to homepage, repository, leaderboard, thanks to me message, in ro_sts_parallel.py changed to camel case as well). In the ro_sts_parallel I have changed the order on the languages, also in the example, as you said order doesn't matter, but just to have them listed in the readme in the same order.\r\n\r\nI have commented above on why we would like to keep them as separate datasets, hope it makes sense.\r\n\r\nIf there is anything else I should change please let me know.\r\n\r\nThanks again!", "@lhoestq I tried to adjust the ro_sts_parallel, locally when I run the tests they are passing, but somewhere it has the old name of rosts-parallel-ro-en which I am trying to change to ro_sts_parallel. I don't think I have left anything related to rosts-parallel-ro-en, but when the dataset_infos.json is regenerated it adds it. Could you please help me out, how can I fix this? Thanks in advance!", "Great, thanks for all your help! " ]
1,614,766,133,000
1,614,938,414,000
1,614,936,835,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1978", "html_url": "https://github.com/huggingface/datasets/pull/1978", "diff_url": "https://github.com/huggingface/datasets/pull/1978.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1978.patch", "merged_at": 1614936835000 }
Adding [RO-STS](https://github.com/dumitrescustefan/RO-STS) dataset
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1978/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1978/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1977
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1977/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1977/comments
https://api.github.com/repos/huggingface/datasets/issues/1977/events
https://github.com/huggingface/datasets/issues/1977
820,312,022
MDU6SXNzdWU4MjAzMTIwMjI=
1,977
ModuleNotFoundError: No module named 'apache_beam' for wikipedia datasets
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "I sometimes also get this error with other languages of the same dataset:\r\n\r\n File \"/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py\", line 322, in read_table\r\n stream = stream_from(filename)\r\n File \"pyarrow/io.pxi\", line 782, in pyarrow.lib.memory_map\r\n File \"pyarrow/io.pxi\", line 743, in pyarrow.lib.MemoryMappedFile._open\r\n File \"pyarrow/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\nOSError: Memory mapping file failed: Cannot allocate memory\r\n\r\n@lhoestq \r\n", "Hi ! Thanks for reporting\r\nSome wikipedia configurations do require the user to have `apache_beam` in order to parse the wikimedia data.\r\n\r\nOn the other hand regarding your second issue\r\n```\r\nOSError: Memory mapping file failed: Cannot allocate memory\r\n```\r\nI've never experienced this, can you open a new issue for this specific error and provide more details please ?\r\nFor example what script did you use to get this, what language did you use, what's your environment details (os, python version, pyarrow version).." ]
1,614,712,888,000
1,614,766,660,000
null
NONE
null
null
null
Hi, I am trying to run huggingface's run_mlm.py code [1] with the "wikipedia"/"20200501.aa" dataset: `python run_mlm.py --model_name_or_path bert-base-multilingual-cased --dataset_name wikipedia --dataset_config_name 20200501.aa --do_train --do_eval --output_dir /tmp/test-mlm --max_seq_length 256 ` I am getting the error below, but as per the documentation, the huggingface datasets library provides a processed version of this dataset that users can load without setting up extra settings for apache-beam. Could you please help me load this dataset? Do you think I can run run_mlm.py with this dataset, or is there any way I could subsample and train the model? I would greatly appreciate a processed version of all languages of this dataset, which would allow users to load them without setting up apache-beam. Thanks, I really appreciate your help. @lhoestq thanks. [1] https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py The error I get: ``` >>> import datasets >>> datasets.load_dataset("wikipedia", "20200501.aa") Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /dara/temp/cache_home_2/datasets/wikipedia/20200501.aa/1.0.0/4021357e28509391eab2f8300d9b689e7e8f3a877ebb3d354b01577d497ebc63... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/dara/temp/libs/anaconda3/envs/codes/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/load.py", line 746, in load_dataset use_auth_token=use_auth_token, File "/dara/temp/libs/anaconda3/envs/codes/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 573, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/dara/temp/libs/anaconda3/envs/codes/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 1099, in _download_and_prepare import apache_beam as beam ModuleNotFoundError: No module named 'apache_beam' ```
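A hedged workaround sketch: the `20200501.aa` config is apparently not among the preprocessed ones, so the loader falls back to parsing the raw dump with Apache Beam. Installing Beam and passing a local runner lets the build proceed (the package names and the `beam_runner` argument follow the library's Beam documentation):

```python
# First, in a shell: pip install apache-beam mwparserfromhell
# Then run the Beam pipeline locally with the direct runner; a tiny language
# like "aa" should process quickly, larger ones need a real Beam runner.
from datasets import load_dataset

wiki = load_dataset("wikipedia", "20200501.aa", beam_runner="DirectRunner")
```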
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1977/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1977/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1976
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1976/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1976/comments
https://api.github.com/repos/huggingface/datasets/issues/1976/events
https://github.com/huggingface/datasets/pull/1976
820,228,538
MDExOlB1bGxSZXF1ZXN0NTgzMjA3NDI4
1,976
Add datasets full offline mode with HF_DATASETS_OFFLINE
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,614,706,019,000
1,614,786,331,000
1,614,786,330,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1976", "html_url": "https://github.com/huggingface/datasets/pull/1976", "diff_url": "https://github.com/huggingface/datasets/pull/1976.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1976.patch", "merged_at": 1614786330000 }
Add the HF_DATASETS_OFFLINE environment variable for users who want to use `datasets` offline without having to wait for the network timeouts/retries to happen. This was requested in https://github.com/huggingface/datasets/issues/1939 cc @stas00
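A usage sketch of the new flag, assuming the dataset files were cached in a previous online run:

```python
# Enable full offline mode before importing datasets so that any network
# call fails fast instead of waiting for timeouts and retries.
import os
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

ds = load_dataset("squad")  # served entirely from the local cache
```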
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1976/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1976/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1975
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1975/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1975/comments
https://api.github.com/repos/huggingface/datasets/issues/1975/events
https://github.com/huggingface/datasets/pull/1975
820,205,485
MDExOlB1bGxSZXF1ZXN0NTgzMTg4NjM3
1,975
Fix flake8
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,614,704,353,000
1,614,854,602,000
1,614,854,602,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1975", "html_url": "https://github.com/huggingface/datasets/pull/1975", "diff_url": "https://github.com/huggingface/datasets/pull/1975.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1975.patch", "merged_at": 1614854602000 }
Fix flake8 style.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1975/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1975/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1974
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1974/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1974/comments
https://api.github.com/repos/huggingface/datasets/issues/1974/events
https://github.com/huggingface/datasets/pull/1974
820,122,223
MDExOlB1bGxSZXF1ZXN0NTgzMTE5MDI0
1,974
feat(docs): navigate with left/right arrow keys
{ "login": "ydcjeff", "id": 32727188, "node_id": "MDQ6VXNlcjMyNzI3MTg4", "avatar_url": "https://avatars.githubusercontent.com/u/32727188?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydcjeff", "html_url": "https://github.com/ydcjeff", "followers_url": "https://api.github.com/users/ydcjeff/followers", "following_url": "https://api.github.com/users/ydcjeff/following{/other_user}", "gists_url": "https://api.github.com/users/ydcjeff/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydcjeff/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydcjeff/subscriptions", "organizations_url": "https://api.github.com/users/ydcjeff/orgs", "repos_url": "https://api.github.com/users/ydcjeff/repos", "events_url": "https://api.github.com/users/ydcjeff/events{/privacy}", "received_events_url": "https://api.github.com/users/ydcjeff/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,614,698,690,000
1,614,854,652,000
1,614,854,568,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1974", "html_url": "https://github.com/huggingface/datasets/pull/1974", "diff_url": "https://github.com/huggingface/datasets/pull/1974.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1974.patch", "merged_at": 1614854568000 }
Enables docs navigation with left/right arrow keys. It can be useful for those who navigate with the keyboard a lot. More info: https://github.com/sphinx-doc/sphinx/pull/2064 You can try it here: https://29353-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html
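The change most likely amounts to a single theme option in the Sphinx config; a sketch, assuming the docs theme inherits this option from Sphinx's basic theme (see the linked sphinx-doc PR):

```python
# docs/source/conf.py
html_theme_options = {
    "navigation_with_keys": True,  # left/right arrow keys go to prev/next page
}
```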
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1974/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1974/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1973
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1973/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1973/comments
https://api.github.com/repos/huggingface/datasets/issues/1973/events
https://github.com/huggingface/datasets/issues/1973
820,077,312
MDU6SXNzdWU4MjAwNzczMTI=
1,973
Question: what gets stored in the datasets cache and why is it so huge?
{ "login": "ioana-blue", "id": 17202292, "node_id": "MDQ6VXNlcjE3MjAyMjky", "avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ioana-blue", "html_url": "https://github.com/ioana-blue", "followers_url": "https://api.github.com/users/ioana-blue/followers", "following_url": "https://api.github.com/users/ioana-blue/following{/other_user}", "gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}", "starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions", "organizations_url": "https://api.github.com/users/ioana-blue/orgs", "repos_url": "https://api.github.com/users/ioana-blue/repos", "events_url": "https://api.github.com/users/ioana-blue/events{/privacy}", "received_events_url": "https://api.github.com/users/ioana-blue/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Echo'ing this observation: I have a few datasets in the neighborhood of 2GB CSVs uncompressed, and when I use something like `Dataset.save_to_disk()` it's ~18GB on disk.\r\n\r\nIf this is unexpected behavior, would be happy to help run debugging as needed.", "Thanks @ioana-blue for pointing out this problem (and thanks also @justin-yan). You are right that current implementation of the datasets caching files take too much memory. We are definitely changing this and optimizing the defaults, so that the file sizes are considerably reduced. I will come back to you as soon as this is fixed.", "Thank you! Also I noticed that the files don't seem to be cleaned after the jobs finish. Last night I had only 3 jobs running, but the cache was still at 180GB. ", "And to clarify, it's not memory, it's disk space. Thank you!", "Hi ! As Albert said they can sometimes take more space that expected but we'll fix that soon.\r\n\r\nAlso, to give more details about caching: computations on a dataset are cached by default so that you don't have to recompute them the next time you run them.\r\n\r\nSo by default the cache files stay on your disk when you job is finished (so that if you re-execute it, it will be reloaded from the cache).\r\nFeel free to clear your cache after your job has finished, or disable caching using\r\n```python\r\nimport datasets\r\n\r\ndatasets.set_caching_enabled(False)\r\n```", "Thanks for the tip, this is useful. ", "Hi @ioana-blue, we have optimized Datasets' disk usage in the latest release v1.5.\r\n\r\nFeel free to update your Datasets version\r\n```shell\r\npip install -U datasets\r\n```\r\nand see if it better suits your needs.", "Thank you!" ]
1,614,695,753,000
1,617,113,039,000
1,615,887,840,000
NONE
null
null
null
I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G and it seems really large. What is stored in there and why is it so large? I don't think I noticed this problem before, and it seems to be related to the new version of the datasets library. Any insight? Thank you!
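Two handy calls for inspecting and reclaiming the cache, complementing the comments above (a sketch; the dataset and transform are illustrative):

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")
ds = ds.map(lambda example: example)  # each transform writes an Arrow cache file

print(ds.cache_files)            # paths of the Arrow files backing this dataset
print(ds.cleanup_cache_files())  # delete them once the job is done
```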
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1973/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1973/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1972
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1972/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1972/comments
https://api.github.com/repos/huggingface/datasets/issues/1972/events
https://github.com/huggingface/datasets/issues/1972
819,752,761
MDU6SXNzdWU4MTk3NTI3NjE=
1,972
'Dataset' object has no attribute 'rename_column'
{ "login": "farooqzaman1", "id": 23195502, "node_id": "MDQ6VXNlcjIzMTk1NTAy", "avatar_url": "https://avatars.githubusercontent.com/u/23195502?v=4", "gravatar_id": "", "url": "https://api.github.com/users/farooqzaman1", "html_url": "https://github.com/farooqzaman1", "followers_url": "https://api.github.com/users/farooqzaman1/followers", "following_url": "https://api.github.com/users/farooqzaman1/following{/other_user}", "gists_url": "https://api.github.com/users/farooqzaman1/gists{/gist_id}", "starred_url": "https://api.github.com/users/farooqzaman1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/farooqzaman1/subscriptions", "organizations_url": "https://api.github.com/users/farooqzaman1/orgs", "repos_url": "https://api.github.com/users/farooqzaman1/repos", "events_url": "https://api.github.com/users/farooqzaman1/events{/privacy}", "received_events_url": "https://api.github.com/users/farooqzaman1/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! `rename_column` has been added recently and will be available in the next release" ]
1,614,672,109,000
1,614,690,483,000
null
NONE
null
null
null
'Dataset' object has no attribute 'rename_column'
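Once on a release that includes it (see the comment above), the usage is straightforward; a minimal sketch:

```python
from datasets import Dataset

ds = Dataset.from_dict({"old_name": [1, 2, 3]})
ds = ds.rename_column("old_name", "new_name")  # returns a new Dataset
print(ds.column_names)  # ['new_name']
```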
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1972/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1972/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1971
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1971/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1971/comments
https://api.github.com/repos/huggingface/datasets/issues/1971/events
https://github.com/huggingface/datasets/pull/1971
819,714,231
MDExOlB1bGxSZXF1ZXN0NTgyNzgyNTU0
1,971
Fix ArrowWriter closes stream at exit
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Oh nice thanks for adding the context manager ! All the streams and RecordBatchWriter will be properly closed now. Hopefully this gives a better experience on windows on which it's super important to close stuff.\r\n\r\nNot sure about the error, it looks like a process crashed silently.\r\nLet me take a look", "> Hopefully this gives a better experience on windows on which it's super important to close stuff.\r\n\r\nExactly! On Windows, you got:\r\n> PermissionError: [WinError 32] The process cannot access the file because it is being used by another process\r\n\r\nwhen trying to access the unclosed `stream` file, e.g. by `with incomplete_dir(self._cache_dir) as tmp_data_dir`: `shutil.rmtree(tmp_dir)`\r\n\r\nThe reason is: https://docs.python.org/3/library/os.html#os.remove\r\n\r\n> On Windows, attempting to remove a file that is in use causes an exception to be raised; on Unix, the directory entry is removed but the storage allocated to the file is not made available until the original file is no longer in use.\r\n\r\n\r\n", "The test passes on my windows. This was probably a circleCI issue. I re-ran the circleCI tests", "NICE! It passed!", "Maybe you can merge master into this branch and check the CI before merging ?", "@lhoestq done! ;)", "Thanks ! merging" ]
1,614,669,154,000
1,615,394,217,000
1,615,394,217,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1971", "html_url": "https://github.com/huggingface/datasets/pull/1971", "diff_url": "https://github.com/huggingface/datasets/pull/1971.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1971.patch", "merged_at": 1615394216000 }
The current implementation of ArrowWriter does not properly release the `stream` resource (by closing it) if its `finalize()` method is not called, or if an exception is raised before or during the call to `finalize()`. Therefore, ArrowWriter should be used as a context manager that properly closes its `stream` resource at exit.
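A sketch of the resulting usage pattern; `ArrowWriter` is an internal class, so the exact signature shown here is an assumption based on this description:

```python
from datasets.arrow_writer import ArrowWriter

with ArrowWriter(path="data.arrow") as writer:
    writer.write({"text": "hello"})
    writer.write({"text": "world"})
    num_examples, num_bytes = writer.finalize()
# __exit__ closes the underlying stream even if an exception was raised, so
# the file is not left locked (important on Windows, see the comments above).
```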
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1971/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1971/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1970
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1970/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1970/comments
https://api.github.com/repos/huggingface/datasets/issues/1970/events
https://github.com/huggingface/datasets/pull/1970
819,500,620
MDExOlB1bGxSZXF1ZXN0NTgyNjAzMzEw
1,970
Fixing the URL filtering for bad MLSUM examples in GEM
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,614,648,178,000
1,614,655,146,000
1,614,650,493,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1970", "html_url": "https://github.com/huggingface/datasets/pull/1970", "diff_url": "https://github.com/huggingface/datasets/pull/1970.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1970.patch", "merged_at": 1614650493000 }
This updates the code and metadata to use the updated `gem_mlsum_bad_ids_fixed.json` file provided by @juand-r. cc @sebastianGehrmann
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1970/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1970/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1967
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1967/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1967/comments
https://api.github.com/repos/huggingface/datasets/issues/1967/events
https://github.com/huggingface/datasets/pull/1967
819,129,568
MDExOlB1bGxSZXF1ZXN0NTgyMjc5OTEx
1,967
Add Turkish News Category Dataset - 270K - Lite Version
{ "login": "yavuzKomecoglu", "id": 5150963, "node_id": "MDQ6VXNlcjUxNTA5NjM=", "avatar_url": "https://avatars.githubusercontent.com/u/5150963?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yavuzKomecoglu", "html_url": "https://github.com/yavuzKomecoglu", "followers_url": "https://api.github.com/users/yavuzKomecoglu/followers", "following_url": "https://api.github.com/users/yavuzKomecoglu/following{/other_user}", "gists_url": "https://api.github.com/users/yavuzKomecoglu/gists{/gist_id}", "starred_url": "https://api.github.com/users/yavuzKomecoglu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yavuzKomecoglu/subscriptions", "organizations_url": "https://api.github.com/users/yavuzKomecoglu/orgs", "repos_url": "https://api.github.com/users/yavuzKomecoglu/repos", "events_url": "https://api.github.com/users/yavuzKomecoglu/events{/privacy}", "received_events_url": "https://api.github.com/users/yavuzKomecoglu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for the change, merging now !" ]
1,614,622,919,000
1,614,705,900,000
1,614,705,900,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1967", "html_url": "https://github.com/huggingface/datasets/pull/1967", "diff_url": "https://github.com/huggingface/datasets/pull/1967.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1967.patch", "merged_at": 1614705900000 }
This PR adds the Turkish News Category Dataset (270K - Lite Version), a text classification dataset prepared by me, @basakbuluz, and @serdarakyol. It contains the same news as the current [interpress_news_category_tr dataset](https://huggingface.co/datasets/interpress_news_category_tr), but it carries less metadata, has fewer OCR errors, can be split easily, and is rearranged into 10 classes ("kültürsanat", "ekonomi", "siyaset", "eğitim", "dünya", "spor", "teknoloji", "magazin", "sağlık", "gündem"). (A hypothetical loading sketch follows this record.)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1967/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1967/timeline
null
true
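If the lite dataset is published under the id suggested by the PR title, loading it would look like the sketch below. The dataset id `interpress_news_category_tr_lite` and the column layout are assumptions based on the PR description, not confirmed names.

```python
# Hypothetical usage sketch; the dataset id and column layout are
# assumptions inferred from the PR, not verified Hub metadata.
from datasets import load_dataset

ds = load_dataset("interpress_news_category_tr_lite", split="train")
print(ds.features)  # expected: a text column plus a 10-way class label
print(ds[0])        # one cleaned Turkish news example
```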
https://api.github.com/repos/huggingface/datasets/issues/1966
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1966/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1966/comments
https://api.github.com/repos/huggingface/datasets/issues/1966/events
https://github.com/huggingface/datasets/pull/1966
819,101,253
MDExOlB1bGxSZXF1ZXN0NTgyMjU2MzE0
1,966
Fix metrics collision in separate multiprocessed experiments
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Since the failure was originally intermittent, there is no 100% telling that the problem is gone. \r\nBut if my artificial race condition setup https://github.com/huggingface/datasets/issues/1942#issuecomment-787124529 is to be the litmus test then the problem has been fixed, as with this PR branch that particular race condition is taken care of correctly.\r\n\r\nThank you for taking care of this, @lhoestq - locking can be very tricky to do right!" ]
1,614,620,718,000
1,614,690,345,000
1,614,690,344,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1966", "html_url": "https://github.com/huggingface/datasets/pull/1966", "diff_url": "https://github.com/huggingface/datasets/pull/1966.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1966.patch", "merged_at": 1614690344000 }
As noticed in #1942, there's an issue with locks when you run multiple separate evaluation experiments in a multiprocessed setup. Indeed, there is a time span in `Metric._finalize()` during which process 0 loses its lock before re-acquiring it. This is bad because the lock held by process 0 is what tells the other processes that the corresponding cache file is in use; while it is released, they consider the file available for writing/reading/deleting, so one metric cache ends up colliding with another. This can raise `FileNotFoundError` when a metric tries to read its cache file after the second, conflicting metric has deleted it. To fix this, I made sure that the lock file of process 0 stays acquired from the creation of the cache file until the end of the metric computation. This way, the other metrics can simply sample a new hashing name to avoid the collision. Finally, I added missing tests for separate experiments in a distributed setup. (A minimal usage sketch follows this record.)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1966/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1966/timeline
null
true
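For reference, the public API already lets separate experiments keep distinct caches via the `experiment_id` argument of `load_metric`. Below is a minimal sketch of two concurrent experiments on the same machine; the worker wiring is simplified (in a real distributed run, each process of an experiment would call this function with its own `process_id`).

```python
# Minimal sketch: each of two *separate* experiments passes its own
# experiment_id so their metric cache files cannot collide.
from datasets import load_metric

def evaluate(experiment_id, process_id, num_process, predictions, references):
    metric = load_metric(
        "accuracy",
        num_process=num_process,
        process_id=process_id,
        experiment_id=experiment_id,  # distinct id per experiment
    )
    metric.add_batch(predictions=predictions, references=references)
    # compute() returns the score on process 0 and None on the other processes
    return metric.compute()

# Two experiments running at the same time on the same machine:
print(evaluate("exp_A", process_id=0, num_process=1, predictions=[0, 1], references=[0, 1]))
print(evaluate("exp_B", process_id=0, num_process=1, predictions=[1, 1], references=[0, 1]))
```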
https://api.github.com/repos/huggingface/datasets/issues/1965
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1965/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1965/comments
https://api.github.com/repos/huggingface/datasets/issues/1965/events
https://github.com/huggingface/datasets/issues/1965
818,833,460
MDU6SXNzdWU4MTg4MzM0NjA=
1,965
Can we parallelized the add_faiss_index process over dataset shards ?
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi !\r\nAs far as I know not all faiss indexes can be computed in parallel and then merged. \r\nFor example [here](https://github.com/facebookresearch/faiss/wiki/Special-operations-on-indexes#splitting-and-merging-indexes) is is mentioned that only IndexIVF indexes can be merged.\r\nMoreover faiss already works using multithreading to parallelize the workload over your different CPU cores. You can find more info [here](https://github.com/facebookresearch/faiss/wiki/Threads-and-asynchronous-calls#internal-threading)\r\nSo I feel like the gains we would get by implementing a parallel `add_faiss_index` would not be that important, but let me know what you think.\r\n", "Actually, you are right. I also had the same idea. I am trying this in the context of end-ton-end retrieval training in RAG. So far I have parallelized the embedding re-computation within the training loop by using datasets shards. \r\n\r\nThen I was thinking of can I calculate the indexes for each shard and combined them with **concatenate** before I save.", "@lhoestq As you mentioned faiss is already using multiprocessing. I tried to do the add_index with faiss for a dataset object inside a RAY actor and the process became very slow... if fact it takes so much time. It is because a ray actor comes with a single CPU core unless we assign it more. I also tried assigning more cores but still running add_index in the main process is very fast. " ]
1,614,602,854,000
1,614,886,856,000
1,614,886,842,000
NONE
null
null
null
I am thinking of making the **add_faiss_index** process faster. What if we run **add_faiss_index** on separate dataset shards and then combine the resulting indexes (with dataset.concatenate) before saving the faiss.index file? I feel that, theoretically, this could reduce the retrieval accuracy, since it affects the indexing process. (A rough merging sketch under stated assumptions follows this record.) @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1965/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1965/timeline
null
false
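To make the question above concrete, here is a rough sketch of building one IVF index per shard and merging the parts afterwards, which faiss supports only for IndexIVF-style indexes (as the linked wiki page notes). The embedding dimension, `nlist` value, and the random vectors are illustrative assumptions, not values from any real setup.

```python
# Rough sketch: per-shard IndexIVFFlat indexes sharing one trained
# quantizer, merged with merge_from(). Illustrative values throughout.
import faiss
import numpy as np

d, nlist = 768, 100                          # assumed embedding dim / #cells
shards = [np.random.rand(1000, d).astype("float32") for _ in range(4)]

quantizer = faiss.IndexFlatIP(d)
base = faiss.IndexIVFFlat(quantizer, d, nlist, faiss.METRIC_INNER_PRODUCT)
base.train(np.concatenate(shards))           # train once, shared by all parts

parts = []
for vectors in shards:                       # could run in separate workers
    part = faiss.clone_index(base)           # trained but empty copy
    part.add(vectors)
    parts.append(part)

merged = faiss.clone_index(base)
for part in parts:
    merged.merge_from(part, merged.ntotal)   # only IndexIVF supports merging
print(merged.ntotal)                         # 4000 vectors in the merged index

# The merged index could then be attached to a dataset object with e.g.
# dataset.add_faiss_index(column="embeddings", custom_index=merged)
```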