| Column | Type | Range / classes |
|---|---|---|
| url | stringlengths | 58-61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72-75 |
| comments_url | stringlengths | 67-70 |
| events_url | stringlengths | 65-68 |
| html_url | stringlengths | 46-51 |
| id | int64 | 599M-1.2B |
| node_id | stringlengths | 18-32 |
| number | int64 | 1-4.12k |
| title | stringlengths | 1-276 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | int64 | 1,587B-1,649B |
| updated_at | int64 | 1,587B-1,649B |
| closed_at | int64 | 1,587B-1,649B |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| body | stringlengths | 0-228k |
| reactions | dict | |
| timeline_url | stringlengths | 67-70 |
| performed_via_github_app | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
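Each record below is one row of the dataset, flattened one field per line in the column order above. As a minimal sketch of how a dataset with this schema could be loaded and queried with the `datasets` library (the Hub repository id `lewtun/github-issues` is an assumption; this preview resembles that course dataset, but substitute whatever repository or local files actually back it):

```python
# Minimal sketch: load and query a GitHub-issues dataset with the schema above.
# The Hub id "lewtun/github-issues" is an assumption, not confirmed by this preview.
from datetime import datetime, timezone

from datasets import load_dataset

issues = load_dataset("lewtun/github-issues", split="train")

# Columns map onto the schema, e.g. `state` is a string column with 2 values:
print(issues.features["state"])
print(issues[0]["title"])

# created_at/updated_at/closed_at look like Unix epoch milliseconds: the
# 1,587B-1,649B range in the schema corresponds to spring 2020 - spring 2022.
created = datetime.fromtimestamp(issues[0]["created_at"] / 1000, tz=timezone.utc)
print(created.isoformat())

# `is_pull_request` distinguishes PRs from plain issues:
pulls = issues.filter(lambda row: row["is_pull_request"])
print(f"{len(pulls)} of {len(issues)} records are pull requests")
```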
https://api.github.com/repos/huggingface/datasets/issues/3619
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3619/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3619/comments
https://api.github.com/repos/huggingface/datasets/issues/3619/events
https://github.com/huggingface/datasets/pull/3619
1,112,611,415
PR_kwDODunzps4xfnCQ
3,619
fix meta in mls
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Feel free to merge @polinaeterna as soon as you got an approval from either @lhoestq , @albertvillanova or @mariosasko" ]
1,643,028,878,000
1,643,057,602,000
1,643,057,602,000
CONTRIBUTOR
null
`monolingual` value of `multilinguality` param in yaml meta was changed to `multilingual` :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3619/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3619/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3619", "html_url": "https://github.com/huggingface/datasets/pull/3619", "diff_url": "https://github.com/huggingface/datasets/pull/3619.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3619.patch", "merged_at": 1643057601000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3618
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3618/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3618/comments
https://api.github.com/repos/huggingface/datasets/issues/3618/events
https://github.com/huggingface/datasets/issues/3618
1,112,123,365
I_kwDODunzps5CSafl
3,618
TIMIT Dataset not working with GPU
{ "login": "TheSeamau5", "id": 3227869, "node_id": "MDQ6VXNlcjMyMjc4Njk=", "avatar_url": "https://avatars.githubusercontent.com/u/3227869?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TheSeamau5", "html_url": "https://github.com/TheSeamau5", "followers_url": "https://api.github.com/users/TheSeamau5/followers", "following_url": "https://api.github.com/users/TheSeamau5/following{/other_user}", "gists_url": "https://api.github.com/users/TheSeamau5/gists{/gist_id}", "starred_url": "https://api.github.com/users/TheSeamau5/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TheSeamau5/subscriptions", "organizations_url": "https://api.github.com/users/TheSeamau5/orgs", "repos_url": "https://api.github.com/users/TheSeamau5/repos", "events_url": "https://api.github.com/users/TheSeamau5/events{/privacy}", "received_events_url": "https://api.github.com/users/TheSeamau5/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi ! I think you should avoid calling `timit_train['audio']`. Indeed by doing so you're **loading all the audio column in memory**. This is problematic in your case because the TIMIT dataset is huge.\r\n\r\nIf you want to access the audio data of some samples, you should do this instead `timit_train[:10][\"train\"]` for example.\r\n\r\nOther than that, I'm not sure why you get a `TypeError: string indices must be integers`, do you have a code snippet that reproduces the issue that you can share here ?", "I get the same error when I try to do `timit_train[0]` or really any indexing into the whole thing. \r\n\r\nReally, that IS the code snippet that reproduces the issue. If you index into other fields like 'file' or whatever, it works. As soon as one of the fields you're looking into is 'audio', you get that issue. It's a weird issue and I suspect it's Sagemaker/environment related, maybe the mix of libraries and dependencies are not good. \r\n\r\n\r\nExample code snippet with issue. \r\n```python\r\nfrom datasets import load_dataset\r\n\r\ntimit_train = load_dataset('timit_asr', split='train')\r\nprint(timit_train[0])\r\n```", "Ok I see ! From the error you got, it looks like the `value` encoded in the arrow file of the TIMIT dataset you loaded is a string instead of a dictionary with keys \"path\" and \"bytes\" but we don't support this since 1.18\r\n\r\nCan you try regenerating the dataset with `load_dataset('timit_asr', download_mode=\"force_redownload\")` please ? I think it should fix the issue." ]
1,642,994,763,000
1,643,289,471,000
null
NONE
null
## Describe the bug I am trying to use the TIMIT dataset in order to fine-tune a Wav2Vec2 model and I am unable to load the "audio" column from the dataset when working with a GPU. I am working on Amazon Sagemaker Studio, on the Python 3 (PyTorch 1.8 Python 3.6 GPU Optimized) environment, with a single ml.g4dn.xlarge instance (corresponds to a Tesla T4 GPU). I don't know if the issue is GPU related or Python environment related because everything works when I work off of the CPU Optimized environment with a non-GPU instance. My code also works on Google Colab with a GPU instance. This issue is blocking because I cannot get the 'audio' column in any way due to this error, which means that I can't pass it to any functions. I later use the dataset.map function and that is where I originally noticed this error. ## Steps to reproduce the bug ```python from datasets import load_dataset timit_train = load_dataset('timit_asr', split='train') print(timit_train['audio']) ``` ## Expected results Expected to see inside the 'audio' column, which contains an 'array' nested field with the array data I actually need. ## Actual results Traceback ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-6-ceeac555e921> in <module> ----> 1 timit_train['audio'] /opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py in __getitem__(self, key) 1917 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).""" 1918 return self._getitem( -> 1919 key, 1920 ) 1921 /opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py in _getitem(self, key, decoded, **kwargs) 1902 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) 1903 formatted_output = format_table( -> 1904 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns 1905 ) 1906 return formatted_output /opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns) 529 python_formatter = PythonFormatter(features=None) 530 if format_columns is None: --> 531 return formatter(pa_table, query_type=query_type) 532 elif query_type == "column": 533 if key in format_columns: /opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type) 280 return self.format_row(pa_table) 281 elif query_type == "column": --> 282 return self.format_column(pa_table) 283 elif query_type == "batch": 284 return self.format_batch(pa_table) /opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_column(self, pa_table) 315 column = self.python_arrow_extractor().extract_column(pa_table) 316 if self.decoded: --> 317 column = self.python_features_decoder.decode_column(column, pa_table.column_names[0]) 318 return column 319 /opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in decode_column(self, column, column_name) 221 222 def decode_column(self, column: list, column_name: str) -> list: --> 223 return self.features.decode_column(column, column_name) if self.features else column 224 225 def decode_batch(self, batch: dict) -> dict: /opt/conda/lib/python3.6/site-packages/datasets/features/features.py in decode_column(self, column, column_name) 1337 return ( 1338 [self[column_name].decode_example(value) if value is not None else None for value in column] -> 1339 if self._column_requires_decoding[column_name] 1340 else column 1341 ) /opt/conda/lib/python3.6/site-packages/datasets/features/features.py in <listcomp>(.0) 1336 """ 1337 return ( -> 1338 [self[column_name].decode_example(value) if value is not None else None for value in column] 1339 if self._column_requires_decoding[column_name] 1340 else column /opt/conda/lib/python3.6/site-packages/datasets/features/audio.py in decode_example(self, value) 85 dict 86 """ ---> 87 path, file = (value["path"], BytesIO(value["bytes"])) if value["bytes"] is not None else (value["path"], None) 88 if path is None and file is None: 89 raise ValueError(f"An audio sample should have one of 'path' or 'bytes' but both are None in {value}.") TypeError: string indices must be integers ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.0 - Platform: Linux-4.14.256-197.484.amzn2.x86_64-x86_64-with-debian-buster-sid - Python version: 3.6.13 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3618/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3618/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3617
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3617/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3617/comments
https://api.github.com/repos/huggingface/datasets/issues/3617/events
https://github.com/huggingface/datasets/pull/3617
1,111,938,691
PR_kwDODunzps4xdb8K
3,617
PR for the CFPB Consumer Complaints dataset
{ "login": "kayvane1", "id": 42403093, "node_id": "MDQ6VXNlcjQyNDAzMDkz", "avatar_url": "https://avatars.githubusercontent.com/u/42403093?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kayvane1", "html_url": "https://github.com/kayvane1", "followers_url": "https://api.github.com/users/kayvane1/followers", "following_url": "https://api.github.com/users/kayvane1/following{/other_user}", "gists_url": "https://api.github.com/users/kayvane1/gists{/gist_id}", "starred_url": "https://api.github.com/users/kayvane1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kayvane1/subscriptions", "organizations_url": "https://api.github.com/users/kayvane1/orgs", "repos_url": "https://api.github.com/users/kayvane1/repos", "events_url": "https://api.github.com/users/kayvane1/events{/privacy}", "received_events_url": "https://api.github.com/users/kayvane1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "> Nice ! Thanks for adding this dataset :)\n> \n> \n> \n> I left a few comments:\n\nThanks!\n\nI'd be interested in contributing to the core codebase - I had to go down the custom loading approach because I couldn't pull this dataset in using the load_dataset() method. Using either the json or csv files available for this dataset as it was erroring. \n\nI'll rerun it and share the errors and try debug", "Hey @lhoestq ,\r\n\r\nWhen I use this dataset as part of my project, I'm using this method\r\n\r\n`text_dataset = text_dataset['train'].train_test_split(test_size=0.2)`\r\n\r\nto create a train and test split as this dataset doesn't have one. \r\n\r\nCan I add this directly in the script itself somehow, or is it better to give users the flexibility to slice and split their datasets after loading?", "> I'd be interested in contributing to the core codebase - I had to go down the custom loading approach because I couldn't pull this dataset in using the load_dataset() method. Using either the json or csv files available for this dataset as it was erroring.\r\n>\r\n> I'll rerun it and share the errors and try debug\r\n\r\nCool ! Let me know if you have questions or if I can help :)\r\n\r\n> Can I add this directly in the script itself somehow, or is it better to give users the flexibility to slice and split their datasets after loading?\r\n\r\nUsually we let the users the flexibility to split the datasets themselves (unless the dataset is already split, or if there is already a standard way to split it in the papers that use it)", "Thanks Quentin!\r\nAll okay to merge now?", "Thanks for the feedback Quentin and Mario - implemented all changes :)\r\n![Screenshot 2022-01-31 at 23 11 20](https://user-images.githubusercontent.com/42403093/151889262-30737feb-ac9c-4c5a-9326-9812db1d05bc.png)\r\n", "Hey @lhoestq / @mariosasko \r\nAny other changes required to merge? 🤗", "Hi ! Thanks and sorry for the late response \r\n\r\nIt looks very good ! The CI is still failing because it can't file the dummy_data.zip file, you can fix that by moving `datasets/consumer-finance-complaints/dummy/1.0.0/dummy_data.zip` to `datasets/consumer-finance-complaints/dummy/0.0.0/dummy_data.zip` and it should be all good !", "@lhoestq - hopefully that should do it!\r\n" ]
1,642,960,032,000
1,644,268,111,000
1,644,268,111,000
CONTRIBUTOR
null
Think I followed all the steps but please let me know if anything needs changing or any improvements I can make to the code quality
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3617/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3617/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3617", "html_url": "https://github.com/huggingface/datasets/pull/3617", "diff_url": "https://github.com/huggingface/datasets/pull/3617.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3617.patch", "merged_at": 1644268111000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3616
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3616/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3616/comments
https://api.github.com/repos/huggingface/datasets/issues/3616/events
https://github.com/huggingface/datasets/pull/3616
1,111,587,861
PR_kwDODunzps4xcZMD
3,616
Make streamable the BnL Historical Newspapers dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,642,863,156,000
1,643,983,523,000
1,643,983,521,000
MEMBER
null
I've refactored the code in order to make the dataset streamable and to avoid it taking too long: - I've used `iter_files` Close #3615
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3616/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3616/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3616", "html_url": "https://github.com/huggingface/datasets/pull/3616", "diff_url": "https://github.com/huggingface/datasets/pull/3616.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3616.patch", "merged_at": 1643983521000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3615
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3615/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3615/comments
https://api.github.com/repos/huggingface/datasets/issues/3615/events
https://github.com/huggingface/datasets/issues/3615
1,111,576,876
I_kwDODunzps5CQVEs
3,615
Dataset BnL Historical Newspapers does not work in streaming mode
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "@albertvillanova let me know if there is anything I can do to help with this. I had a quick look at the code again and though I could try the following changes:\r\n- use `download` instead of `download_and_extract`\r\nhttps://github.com/huggingface/datasets/blob/d3d339fb86d378f4cb3c5d1de423315c07a466c6/datasets/bnl_newspapers/bnl_newspapers.py#L136\r\n- swith to using `iter_archive` to loop through downloaded data to replace\r\nhttps://github.com/huggingface/datasets/blob/d3d339fb86d378f4cb3c5d1de423315c07a466c6/datasets/bnl_newspapers/bnl_newspapers.py#L159\r\n\r\nLet me know if it's useful for me to try and make those changes. ", "Thanks @davanstrien.\r\n\r\nI have already been working on it so that it can be used in the BigScience workshop.\r\n\r\nI agree that the `rglob()` is not efficient in this case.\r\n\r\nI tried different solutions without success:\r\n- `iter_archive` cannot be used in this case because it does not support ZIP files yet\r\n\r\nFinally I have used `iter_files()`.", "I see this is fixed now 🙂. I also picked up a few other tips from your redactors so hopefully my next attempts will support streaming from the start. " ]
1,642,860,779,000
1,643,983,521,000
1,643,983,521,000
MEMBER
null
## Describe the bug When trying to load in streaming mode, it "hangs"... ## Steps to reproduce the bug ```python ds = load_dataset("bnl_newspapers", split="train", streaming=True) ``` ## Expected results The code should be optimized, so that it works fast in streaming mode. CC: @davanstrien
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3615/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3615/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3614
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3614/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3614/comments
https://api.github.com/repos/huggingface/datasets/issues/3614/events
https://github.com/huggingface/datasets/pull/3614
1,110,736,657
PR_kwDODunzps4xZdCe
3,614
Minor fixes
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,642,787,324,000
1,643,028,349,000
1,643,028,349,000
CONTRIBUTOR
null
This PR: * adds "desc" to the `ignore_kwargs` list in `Dataset.filter` * fixes the default value of `id` in `DatasetDict.prepare_for_task`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3614/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3614/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3614", "html_url": "https://github.com/huggingface/datasets/pull/3614", "diff_url": "https://github.com/huggingface/datasets/pull/3614.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3614.patch", "merged_at": 1643028349000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3613
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3613/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3613/comments
https://api.github.com/repos/huggingface/datasets/issues/3613/events
https://github.com/huggingface/datasets/issues/3613
1,110,684,015
I_kwDODunzps5CM7Fv
3,613
Files not updating in dataset viewer
{ "login": "abidlabs", "id": 1778297, "node_id": "MDQ6VXNlcjE3NzgyOTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1778297?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abidlabs", "html_url": "https://github.com/abidlabs", "followers_url": "https://api.github.com/users/abidlabs/followers", "following_url": "https://api.github.com/users/abidlabs/following{/other_user}", "gists_url": "https://api.github.com/users/abidlabs/gists{/gist_id}", "starred_url": "https://api.github.com/users/abidlabs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abidlabs/subscriptions", "organizations_url": "https://api.github.com/users/abidlabs/orgs", "repos_url": "https://api.github.com/users/abidlabs/repos", "events_url": "https://api.github.com/users/abidlabs/events{/privacy}", "received_events_url": "https://api.github.com/users/abidlabs/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
null
[]
null
[ "Yes. The jobs queue is full right now, following an upgrade... Back to normality in the next hours hopefully. I'll look at your datasets to be sure the dataset viewer works as expected on them.", "Should have been fixed now." ]
1,642,783,640,000
1,642,839,193,000
1,642,839,193,000
MEMBER
null
## Dataset viewer issue for '*name of the dataset*' **Link:** Some examples: * https://huggingface.co/datasets/abidlabs/crowdsourced-speech4 * https://huggingface.co/datasets/abidlabs/test-audio-13 *short description of the issue* It seems that the dataset viewer is reading a cached version of the dataset and it is not updating to reflect new files that are added to the dataset. I get this error: ![image](https://user-images.githubusercontent.com/1778297/150566660-30dc0dcd-18fd-4471-b70c-7c4bdc6a23c6.png) Am I the one who added this dataset? Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3613/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3613/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3612
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3612/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3612/comments
https://api.github.com/repos/huggingface/datasets/issues/3612/events
https://github.com/huggingface/datasets/pull/3612
1,110,506,466
PR_kwDODunzps4xYsvS
3,612
wikifix
{ "login": "apergo-ai", "id": 68908804, "node_id": "MDQ6VXNlcjY4OTA4ODA0", "avatar_url": "https://avatars.githubusercontent.com/u/68908804?v=4", "gravatar_id": "", "url": "https://api.github.com/users/apergo-ai", "html_url": "https://github.com/apergo-ai", "followers_url": "https://api.github.com/users/apergo-ai/followers", "following_url": "https://api.github.com/users/apergo-ai/following{/other_user}", "gists_url": "https://api.github.com/users/apergo-ai/gists{/gist_id}", "starred_url": "https://api.github.com/users/apergo-ai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apergo-ai/subscriptions", "organizations_url": "https://api.github.com/users/apergo-ai/orgs", "repos_url": "https://api.github.com/users/apergo-ai/repos", "events_url": "https://api.github.com/users/apergo-ai/events{/privacy}", "received_events_url": "https://api.github.com/users/apergo-ai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "tests fail because of dataset_infos.json isn't updated. Unfortunately, I cannot get the datasets-cli locally to execute without error. Would need to troubleshoot, what's missing. Maybe someone else can pick up the stick. ", "Hi ! If we change the default date to the latest one, users won't be able to load the \"big\" languages like english anymore, because it requires an Apache Beam runtime to process them. On the contrary, the old data 20200501 has been processed by Hugging Face so that users don't need to run Apache Beam stuff.\r\n\r\nTherefore I'm in favor of not changing the default date until we have processed the latest versions of wikipedia.\r\n\r\nUsers that want to load other languages or that can use Apache Beam can still pass the `language` and `date` parameter to `load_dataset` if they want anyway:\r\n```python\r\nload_dataset(\"wikipedia\", language=\"fr\", date=\"20220120\")\r\n```", "in that case you can close the PR", "Ok thanks !\r\n\r\n(oh I I just noticed that the dataset card is missing the documentation regarding the language and date parameters, let me add it)" ]
1,642,773,911,000
1,643,911,096,000
1,643,911,096,000
CONTRIBUTOR
null
This should get the wikipedia dataloading script back up and running - at least I hope so (tested with language ff and ii)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3612/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3612/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3612", "html_url": "https://github.com/huggingface/datasets/pull/3612", "diff_url": "https://github.com/huggingface/datasets/pull/3612.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3612.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3611
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3611/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3611/comments
https://api.github.com/repos/huggingface/datasets/issues/3611/events
https://github.com/huggingface/datasets/issues/3611
1,110,399,096
I_kwDODunzps5CL1h4
3,611
Indexing bug after dataset.select()
{ "login": "kamalkraj", "id": 17096858, "node_id": "MDQ6VXNlcjE3MDk2ODU4", "avatar_url": "https://avatars.githubusercontent.com/u/17096858?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kamalkraj", "html_url": "https://github.com/kamalkraj", "followers_url": "https://api.github.com/users/kamalkraj/followers", "following_url": "https://api.github.com/users/kamalkraj/following{/other_user}", "gists_url": "https://api.github.com/users/kamalkraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/kamalkraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kamalkraj/subscriptions", "organizations_url": "https://api.github.com/users/kamalkraj/orgs", "repos_url": "https://api.github.com/users/kamalkraj/repos", "events_url": "https://api.github.com/users/kamalkraj/events{/privacy}", "received_events_url": "https://api.github.com/users/kamalkraj/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi! Thanks for reporting! I've opened a PR with the fix." ]
1,642,766,970,000
1,643,307,382,000
1,643,307,382,000
NONE
null
## Describe the bug A clear and concise description of what the bug is. Dataset indexing is not working as expected after `dataset.select(range(100))` ## Steps to reproduce the bug ```python # Sample code to reproduce the bug import datasets task_to_keys = { "cola": ("sentence", None), "mnli": ("premise", "hypothesis"), "mrpc": ("sentence1", "sentence2"), "qnli": ("question", "sentence"), "qqp": ("question1", "question2"), "rte": ("sentence1", "sentence2"), "sst2": ("sentence", None), "stsb": ("sentence1", "sentence2"), "wnli": ("sentence1", "sentence2"), } task_name = "sst2" raw_datasets = datasets.load_dataset("glue", task_name) train_dataset = raw_datasets["train"] print("before select: ",train_dataset[-2:]) # before select: {'sentence': ['a patient viewer ', 'this new jangle of noise , mayhem and stupidity must be a serious contender for the title . '], 'label': [1, 0], 'idx': [67347, 67348]} train_dataset = train_dataset.select(range(100)) print("after select: ",train_dataset[-2:]) # after select: {'sentence': [], 'label': [], 'idx': []} ``` link to colab: https://colab.research.google.com/drive/1LngeRC9f0jE7eSQ4Kh1cIeb411lRXQD-?usp=sharing ## Expected results A clear and concise description of the expected results. showing 98, 99 index data ## Actual results Specify the actual results or traceback. empty ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3611/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3611/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3610
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3610/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3610/comments
https://api.github.com/repos/huggingface/datasets/issues/3610/events
https://github.com/huggingface/datasets/issues/3610
1,109,777,314
I_kwDODunzps5CJdui
3,610
Checksum error when trying to load amazon_review dataset
{ "login": "rifoag", "id": 32415171, "node_id": "MDQ6VXNlcjMyNDE1MTcx", "avatar_url": "https://avatars.githubusercontent.com/u/32415171?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rifoag", "html_url": "https://github.com/rifoag", "followers_url": "https://api.github.com/users/rifoag/followers", "following_url": "https://api.github.com/users/rifoag/following{/other_user}", "gists_url": "https://api.github.com/users/rifoag/gists{/gist_id}", "starred_url": "https://api.github.com/users/rifoag/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rifoag/subscriptions", "organizations_url": "https://api.github.com/users/rifoag/orgs", "repos_url": "https://api.github.com/users/rifoag/repos", "events_url": "https://api.github.com/users/rifoag/events{/privacy}", "received_events_url": "https://api.github.com/users/rifoag/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "It is solved now" ]
1,642,713,632,000
1,642,771,351,000
1,642,771,351,000
NONE
null
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug I am getting the issue when trying to load dataset using ``` dataset = load_dataset("amazon_polarity") ``` ## Expected results dataset loaded ## Actual results ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) <ipython-input-3-b4758ba980ae> in <module>() ----> 1 dataset = load_dataset("amazon_polarity") 2 dataset.set_format(type='pandas') 3 content_series = dataset['train']['content'] 4 label_series = dataset['train']['label'] 5 df = pd.concat([content_series, label_series], axis=1) 3 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 38 if len(bad_urls) > 0: 39 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 41 logger.info("All the checksums matched successfully" + for_verification_name) 42 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/u/0/uc?id=0Bz8a_Dbh9QhbaW12WVVZS2drcnM&export=download'] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.0 - Platform: Google colab - Python version: 3.7.12
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3610/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3610/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3609
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3609/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3609/comments
https://api.github.com/repos/huggingface/datasets/issues/3609/events
https://github.com/huggingface/datasets/pull/3609
1,109,579,112
PR_kwDODunzps4xVrsG
3,609
Fixes to pubmed dataset download function
{ "login": "spacemanidol", "id": 3886120, "node_id": "MDQ6VXNlcjM4ODYxMjA=", "avatar_url": "https://avatars.githubusercontent.com/u/3886120?v=4", "gravatar_id": "", "url": "https://api.github.com/users/spacemanidol", "html_url": "https://github.com/spacemanidol", "followers_url": "https://api.github.com/users/spacemanidol/followers", "following_url": "https://api.github.com/users/spacemanidol/following{/other_user}", "gists_url": "https://api.github.com/users/spacemanidol/gists{/gist_id}", "starred_url": "https://api.github.com/users/spacemanidol/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/spacemanidol/subscriptions", "organizations_url": "https://api.github.com/users/spacemanidol/orgs", "repos_url": "https://api.github.com/users/spacemanidol/repos", "events_url": "https://api.github.com/users/spacemanidol/events{/privacy}", "received_events_url": "https://api.github.com/users/spacemanidol/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! I think we can simply add a new configuration for the 2022 data instead of replacing them.\r\nYou can add the new configuration here:\r\n```python\r\n BUILDER_CONFIGS = [\r\n datasets.BuilderConfig(name=\"2021\", description=\"The 2021 annual record\", version=datasets.Version(\"1.0.0\")),\r\n datasets.BuilderConfig(name=\"2022\", description=\"The 2022 annual record\", version=datasets.Version(\"1.0.0\")),\r\n ]\r\n```\r\n\r\nAnd we can have the URLs for these two versions this way:\r\n```python\r\n_URLs = {\r\n \"2021\": f\"ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n{i:04d}.xml.gz\" for i in range(1, 1063)],\r\n \"2022\": f\"ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed22n{i:04d}.xml.gz\" for i in range(1, 1114)]\r\n}\r\n```\r\nand depending on the configuration name (you can get it with `self.config.name`) we can pick the URLs of 2021 or the ones of 2022 and pass them to the `dl_manager` in `_split_generators`\r\n\r\nFeel free to ping me if you have questions or if I can help !", "Hi @spacemanidol, thanks for your contribution.\r\n\r\nThe update of the PubMed dataset URL (besides the update of the corresponding metadata and the dummy data) was already merged to master branch in this other PR:\r\n- #3692 \r\n\r\nI'm closing this PR then.\r\n\r\n@lhoestq please take into account that 2021 data is no longer accessible: every year PubMed releases the baseline data (containing all previous data until that year) and from that on, they release daily updates. ", "> @lhoestq please take into account that 2021 data is no longer accessible: every year PubMed releases the baseline data (containing all previous data until that year) and from that on, they release daily updates.\r\n\r\nOh ok I didn't know, thanks" ]
1,642,699,895,000
1,646,324,332,000
1,646,317,415,000
NONE
null
Pubmed has updated its settings for 2022 and thus the existing download script does not work.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3609/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3609/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3609", "html_url": "https://github.com/huggingface/datasets/pull/3609", "diff_url": "https://github.com/huggingface/datasets/pull/3609.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3609.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3608
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3608/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3608/comments
https://api.github.com/repos/huggingface/datasets/issues/3608/events
https://github.com/huggingface/datasets/issues/3608
1,109,310,981
I_kwDODunzps5CHr4F
3,608
Add support for continuous metrics (RMSE, MAE)
{ "login": "ck37", "id": 50770, "node_id": "MDQ6VXNlcjUwNzcw", "avatar_url": "https://avatars.githubusercontent.com/u/50770?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ck37", "html_url": "https://github.com/ck37", "followers_url": "https://api.github.com/users/ck37/followers", "following_url": "https://api.github.com/users/ck37/following{/other_user}", "gists_url": "https://api.github.com/users/ck37/gists{/gist_id}", "starred_url": "https://api.github.com/users/ck37/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ck37/subscriptions", "organizations_url": "https://api.github.com/users/ck37/orgs", "repos_url": "https://api.github.com/users/ck37/repos", "events_url": "https://api.github.com/users/ck37/events{/privacy}", "received_events_url": "https://api.github.com/users/ck37/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
null
[]
null
[ "Hey @ck37 \r\n\r\nYou can always use a custom metric as explained [in this guide from HF](https://huggingface.co/docs/datasets/master/loading_metrics.html#using-a-custom-metric-script).\r\n\r\nIf this issue needs to be contributed to (for enhancing the metric API) I think [this link](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html) would be helpful for the `MAE` metric.", "You can use a local metric script just by providing its path instead of the usual shortcut name ", "#self-assign I have starting working on this issue to enhance the metric API." ]
1,642,685,736,000
1,646,846,300,000
1,646,846,300,000
NONE
null
**Is your feature request related to a problem? Please describe.** I am uploading our dataset and models for the "Constructing interval measures" method we've developed, which uses item response theory to convert multiple discrete labels into a continuous spectrum for hate speech. Once we have this outcome our NLP models conduct regression rather than classification, so binary metrics are not relevant. The only continuous metrics available at https://huggingface.co/metrics are pearson & spearman correlation, which don't ensure that the prediction is on the same scale as the outcome. **Describe the solution you'd like** I would like to be able to tag our models on the Hub with the following metrics: - RMSE - MAE **Describe alternatives you've considered** I don't know if there are any alternatives. **Additional context** Our preprint is available here: https://arxiv.org/abs/2009.10277 . We are making it available for use in Jigsaw's Toxic Severity Rating Kaggle competition: https://www.kaggle.com/c/jigsaw-toxic-severity-rating/overview . I have our first model uploaded to the Hub at https://huggingface.co/ucberkeley-dlab/hate-measure-roberta-large Thanks, Chris
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3608/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3608/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3607
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3607/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3607/comments
https://api.github.com/repos/huggingface/datasets/issues/3607/events
https://github.com/huggingface/datasets/pull/3607
1,109,218,370
PR_kwDODunzps4xUgrR
3,607
Add MIT Scene Parsing Benchmark
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,642,680,187,000
1,645,188,661,000
1,645,188,660,000
CONTRIBUTOR
null
Add MIT Scene Parsing Benchmark (a subset of ADE20k). TODOs: * [x] add dummy data * [x] add dataset card * [x] generate `dataset_info.json`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3607/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3607/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3607", "html_url": "https://github.com/huggingface/datasets/pull/3607", "diff_url": "https://github.com/huggingface/datasets/pull/3607.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3607.patch", "merged_at": 1645188660000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3606
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3606/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3606/comments
https://api.github.com/repos/huggingface/datasets/issues/3606/events
https://github.com/huggingface/datasets/issues/3606
1,108,918,701
I_kwDODunzps5CGMGt
3,606
audio column not saved correctly after resampling
{ "login": "laphang", "id": 24724502, "node_id": "MDQ6VXNlcjI0NzI0NTAy", "avatar_url": "https://avatars.githubusercontent.com/u/24724502?v=4", "gravatar_id": "", "url": "https://api.github.com/users/laphang", "html_url": "https://github.com/laphang", "followers_url": "https://api.github.com/users/laphang/followers", "following_url": "https://api.github.com/users/laphang/following{/other_user}", "gists_url": "https://api.github.com/users/laphang/gists{/gist_id}", "starred_url": "https://api.github.com/users/laphang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/laphang/subscriptions", "organizations_url": "https://api.github.com/users/laphang/orgs", "repos_url": "https://api.github.com/users/laphang/repos", "events_url": "https://api.github.com/users/laphang/events{/privacy}", "received_events_url": "https://api.github.com/users/laphang/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! We just released a new version of `datasets` that should fix this.\r\n\r\nI tested resampling and using save/load_from_disk afterwards and it seems to be fixed now", "Hi @lhoestq, \r\n\r\nJust tested the latest datasets version, and confirming that this is fixed for me. \r\n\r\nThanks!", "Also, just an FYI, data that I had saved (with save_to_disk) previously from common voice using datasets==1.17.0 now give the error below when loading (with load_from disk) using datasets==1.18.0. \r\n\r\nHowever, when starting fresh using load_dataset, then doing the resampling, the save/load_from disk worked fine. \r\n\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<timed exec> in <module>\r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_from_disk(dataset_path, fs, keep_in_memory)\r\n 1747 return Dataset.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)\r\n 1748 elif fs.isfile(Path(dest_dataset_path, config.DATASETDICT_JSON_FILENAME).as_posix()):\r\n-> 1749 return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)\r\n 1750 else:\r\n 1751 raise FileNotFoundError(\r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py in load_from_disk(dataset_dict_path, fs, keep_in_memory)\r\n 769 else Path(dest_dataset_dict_path, k).as_posix()\r\n 770 )\r\n--> 771 dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory)\r\n 772 return dataset_dict\r\n 773 \r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in load_from_disk(dataset_path, fs, keep_in_memory)\r\n 1118 info=dataset_info,\r\n 1119 split=split,\r\n-> 1120 fingerprint=state[\"_fingerprint\"],\r\n 1121 )\r\n 1122 \r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in __init__(self, arrow_table, info, split, indices_table, fingerprint)\r\n 655 if self.info.features.type != inferred_features.type:\r\n 656 raise ValueError(\r\n--> 657 f\"External features info don't match the dataset:\\nGot\\n{self.info.features}\\nwith type\\n{self.info.features.type}\\n\\nbut expected something like\\n{inferred_features}\\nwith type\\n{inferred_features.type}\"\r\n 658 )\r\n 659 \r\n\r\nValueError: External features info don't match the dataset:\r\nGot\r\n{'accent': Value(dtype='string', id=None), 'age': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=48000, mono=True, id=None), 'client_id': Value(dtype='string', id=None), 'down_votes': Value(dtype='int64', id=None), 'gender': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None)}\r\nwith type\r\nstruct<accent: string, age: string, audio: struct<bytes: binary, path: string>, client_id: string, down_votes: int64, gender: string, locale: string, path: string, segment: string, sentence: string, up_votes: int64>\r\n\r\nbut expected something like\r\n{'accent': Value(dtype='string', id=None), 'age': Value(dtype='string', id=None), 'audio': {'path': Value(dtype='string', id=None), 'bytes': Value(dtype='binary', id=None)}, 'client_id': Value(dtype='string', id=None), 'down_votes': Value(dtype='int64', id=None), 'gender': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None), 
'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None)}\r\nwith type\r\nstruct<accent: string, age: string, audio: struct<path: string, bytes: binary>, client_id: string, down_votes: int64, gender: string, locale: string, path: string, segment: string, sentence: string, up_votes: int64> \r\n```" ]
1,642,660,630,000
1,642,902,061,000
1,642,901,054,000
NONE
null
## Describe the bug

After resampling the audio column, saving with save_to_disk doesn't seem to save with the correct type.

## Steps to reproduce the bug

- load a subset of common voice dataset (48Khz)
- resample audio column to 16Khz
- save with save_to_disk()
- load with load_from_disk()

(a code sketch of these steps follows below)

## Expected results

I expected that after saving the data, and then loading it back in, the audio column has the correct dataset.Audio type (i.e. same as before saving it)

{'accent': Value(dtype='string', id=None), 'age': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=16000, mono=True, _storage_dtype='string', id=None), 'client_id': Value(dtype='string', id=None), 'down_votes': Value(dtype='int64', id=None), 'gender': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None)}

## Actual results

Audio column does not have the right type

{'accent': Value(dtype='string', id=None), 'age': Value(dtype='string', id=None), 'audio': {'bytes': Value(dtype='binary', id=None), 'path': Value(dtype='string', id=None)}, 'client_id': Value(dtype='string', id=None), 'down_votes': Value(dtype='int64', id=None), 'gender': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None)}

## Environment info

- `datasets` version: 1.17.0
- Platform: linux
- Python version:
- PyArrow version:
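A minimal sketch of the reproduction steps above — the `"tr"` config and the `cv_resampled` path are illustrative assumptions, not values taken from the report:

```python
from datasets import Audio, load_dataset, load_from_disk

# Load a 48 kHz Common Voice subset ("tr" is an arbitrary example config)
ds = load_dataset("common_voice", "tr", split="train")

# Resample the audio column to 16 kHz
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

# Round-trip through disk; per the report, the reloaded features showed a plain
# {'bytes', 'path'} struct instead of the expected Audio(...) feature
ds.save_to_disk("cv_resampled")
reloaded = load_from_disk("cv_resampled")
print(reloaded.features["audio"])
```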
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3606/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3606/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3605
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3605/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3605/comments
https://api.github.com/repos/huggingface/datasets/issues/3605/events
https://github.com/huggingface/datasets/pull/3605
1,108,738,561
PR_kwDODunzps4xS9rX
3,605
Adding Turkic X-WMT evaluation set for machine translation
{ "login": "mirzakhalov", "id": 26018417, "node_id": "MDQ6VXNlcjI2MDE4NDE3", "avatar_url": "https://avatars.githubusercontent.com/u/26018417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mirzakhalov", "html_url": "https://github.com/mirzakhalov", "followers_url": "https://api.github.com/users/mirzakhalov/followers", "following_url": "https://api.github.com/users/mirzakhalov/following{/other_user}", "gists_url": "https://api.github.com/users/mirzakhalov/gists{/gist_id}", "starred_url": "https://api.github.com/users/mirzakhalov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mirzakhalov/subscriptions", "organizations_url": "https://api.github.com/users/mirzakhalov/orgs", "repos_url": "https://api.github.com/users/mirzakhalov/repos", "events_url": "https://api.github.com/users/mirzakhalov/events{/privacy}", "received_events_url": "https://api.github.com/users/mirzakhalov/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "hi! Thank you for all the comments! I believe I addressed them all. Let me know if there is anything else", "Hi there! I was wondering if there is anything else to change before this can be merged", "@lhoestq Hi! Just a gentle reminder about the steps to merge this one! ", "Thanks for the heads up ! I think I fixed the last issue with the YAML tags", "The CI failure is unrelated to this PR and fixed on master, let's merge :)\r\n\r\nThanks a lot !" ]
1,642,642,829,000
1,643,622,657,000
1,643,622,657,000
CONTRIBUTOR
null
This dataset is a human-translated evaluation set for MT, crowdsourced and provided by the [Turkic Interlingua](https://turkic-interlingua.org) community. It contains eval sets for 8 Turkic languages covering 88 language directions.

Languages being covered are:

Azerbaijani (az)
Bashkir (ba)
English (en)
Karakalpak (kaa)
Kazakh (kk)
Kirghiz (ky)
Russian (ru)
Turkish (tr)
Sakha (sah)
Uzbek (uz)

More info about the corpus is here: [https://github.com/turkic-interlingua/til-mt/tree/master/xwmt](https://github.com/turkic-interlingua/til-mt/tree/master/xwmt)

A paper describing the test set is here: [https://arxiv.org/abs/2109.04593](https://arxiv.org/abs/2109.04593)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3605/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3605/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3605", "html_url": "https://github.com/huggingface/datasets/pull/3605", "diff_url": "https://github.com/huggingface/datasets/pull/3605.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3605.patch", "merged_at": 1643622657000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3604
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3604/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3604/comments
https://api.github.com/repos/huggingface/datasets/issues/3604/events
https://github.com/huggingface/datasets/issues/3604
1,108,477,316
I_kwDODunzps5CEgWE
3,604
Dataset Viewer not showing Previews for Private Datasets
{ "login": "abidlabs", "id": 1778297, "node_id": "MDQ6VXNlcjE3NzgyOTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1778297?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abidlabs", "html_url": "https://github.com/abidlabs", "followers_url": "https://api.github.com/users/abidlabs/followers", "following_url": "https://api.github.com/users/abidlabs/following{/other_user}", "gists_url": "https://api.github.com/users/abidlabs/gists{/gist_id}", "starred_url": "https://api.github.com/users/abidlabs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abidlabs/subscriptions", "organizations_url": "https://api.github.com/users/abidlabs/orgs", "repos_url": "https://api.github.com/users/abidlabs/repos", "events_url": "https://api.github.com/users/abidlabs/events{/privacy}", "received_events_url": "https://api.github.com/users/abidlabs/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
open
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
null
[ "Sure, it's on the roadmap." ]
1,642,620,566,000
1,644,828,295,000
null
MEMBER
null
## Dataset viewer issue for 'abidlabs/test-audio-13'

It seems that the dataset viewer does not show previews for `private` datasets, even for the user whose private dataset it is. See [1] for example. If I change the visibility to public, then it does show, but it would be useful to have the viewer even for private datasets.

![image](https://user-images.githubusercontent.com/1778297/150200515-93ff1545-11fd-4793-be64-6bed3cd895e2.png)

**Link:** [1] https://huggingface.co/datasets/abidlabs/test-audio-13

**Am I the one who added this dataset?** Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3604/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3604/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3603
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3603/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3603/comments
https://api.github.com/repos/huggingface/datasets/issues/3603/events
https://github.com/huggingface/datasets/pull/3603
1,108,392,141
PR_kwDODunzps4xR1ih
3,603
Add British Library books dataset
{ "login": "davanstrien", "id": 8995957, "node_id": "MDQ6VXNlcjg5OTU5NTc=", "avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davanstrien", "html_url": "https://github.com/davanstrien", "followers_url": "https://api.github.com/users/davanstrien/followers", "following_url": "https://api.github.com/users/davanstrien/following{/other_user}", "gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}", "starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions", "organizations_url": "https://api.github.com/users/davanstrien/orgs", "repos_url": "https://api.github.com/users/davanstrien/repos", "events_url": "https://api.github.com/users/davanstrien/events{/privacy}", "received_events_url": "https://api.github.com/users/davanstrien/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for all the help and suggestions\r\n\r\n> Since the dataset has a very specific structure it might not be that easy so feel free to ping me if you have questions or if I can help !\r\n\r\nI did get a little stuck here! So far I have created directories for each config i.e:\r\n\r\n`datasets/datasets/blbooks/dummy/1700_1799/1.0.2/dummy_data.zip` \r\n\r\nI have then added two examples of the `jsonl.gz` files that are in the underlying dataset to each dummy_data directory.This fails the test using local files. \r\n\r\nSince \r\n\r\n```python\r\ndef _generate_examples(self, data_dirs):\r\n```\r\n\r\ntakes as input `data_dirs` which is a list of `iter_dirs` do I need to put the dummy files inside another directory? i.e. \r\n\r\n`datasets/datasets/blbooks/dummy/1700_1799/1.0.2/dummy_data/1700/00.jsonl.gz` \r\n\r\n\r\n ", "I think I managed to create the dummy data :)\r\n\r\nI think everything is good now, if you don't have other changes to do, please mark your PR as \"ready for review\" and ping me!", "> I think I managed to create the dummy data :)\r\n\r\nThanks so much for that!\r\n\r\n> I think everything is good now, if you don't have other changes to do, please mark your PR as \"ready for review\" and ping me!\r\n\r\nThink it is ready to merge from my end @lhoestq. ", "The CI failure on windows is unrelated to your PR and fixed on `master`, we can ignore it" ]
1,642,614,785,000
1,643,649,771,000
1,643,648,509,000
CONTRIBUTOR
null
This pull request adds a dataset of text from digitised (primarily 19th Century) books from the British Library. This collection has previously been used for training language models, e.g. https://github.com/dbmdz/clef-hipe/blob/main/hlms.md. It would be nice to make this dataset more accessible for others to use through datasets.

This is still a WIP, but I wanted to get some initial feedback. In particular, I wanted to check:

- I am handling the use of `iter_archive` correctly
  - I intend to ensure that `dl_manager.download` gets the complete list of URLs to download upfront, so the progress bar knows how much is left to download, and then to pass through the `gen_kwargs` a list of downloaded zip archives wrapped in `iter_archive` (a sketch of this pattern follows below). I am unsure if there is a more elegant approach for this?
- the number of configs: I have aimed to keep this limited - there are a lot of URLs covering the entire dataset, but I have tried to base the configs on what I believe the majority of people will want, so they are not presented with too many options - I am happy to hear suggestions for changing this

If there are other glaring omissions or mistakes, I'd be happy to hear them. If this approach seems sensible in general, I will finish all the remaining TODOs, generate dummy_data, etc.
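A minimal sketch of the download-then-`iter_archive` pattern described above. The URLs, class name, and field names are placeholders, not the PR's actual values:

```python
import datasets

# Placeholder URLs - the real PR covers many more archives
_URLS = ["https://example.org/blbooks/part-0000.zip"]


class BLBooksSketch(datasets.GeneratorBasedBuilder):
    """Hypothetical, simplified builder illustrating the pattern only."""

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {"file": datasets.Value("string"), "text": datasets.Value("string")}
            )
        )

    def _split_generators(self, dl_manager):
        # Download every archive upfront so the progress bar knows the total,
        # then wrap each local archive with iter_archive for streaming reads
        paths = dl_manager.download(_URLS)
        archives = [dl_manager.iter_archive(path) for path in paths]
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"archives": archives},
            )
        ]

    def _generate_examples(self, archives):
        key = 0
        for archive in archives:
            # iter_archive yields (path inside the archive, binary file object)
            for fname, fobj in archive:
                yield key, {"file": fname, "text": fobj.read().decode("utf-8")}
                key += 1
```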
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3603/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3603/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3603", "html_url": "https://github.com/huggingface/datasets/pull/3603", "diff_url": "https://github.com/huggingface/datasets/pull/3603.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3603.patch", "merged_at": 1643648509000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3602
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3602/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3602/comments
https://api.github.com/repos/huggingface/datasets/issues/3602/events
https://github.com/huggingface/datasets/pull/3602
1,108,247,870
PR_kwDODunzps4xRXVm
3,602
Update url for conll2003
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi. lhoestq \r\n\r\n![image](https://user-images.githubusercontent.com/21982975/150345097-154f2b1a-bb12-47af-bddf-40eec0a0dadb.png)\r\nWhat is the solution for it?\r\nyou can see it is still doesn't work here.\r\nhttps://colab.research.google.com/drive/1l52FGWuSaOaGYchit4CbmtUSuzNDx_Ok?usp=sharing\r\nThank you.\r\n", "For now you can specify `load_dataset(..., revision=\"master\")` to use the fix on `master`.\r\n\r\nWe'll also do a new release of `datasets` tomorrow I think" ]
1,642,606,504,000
1,642,695,783,000
1,642,607,033,000
MEMBER
null
Following https://github.com/huggingface/datasets/issues/3582 I'm changing the download URL of the conll2003 data files, since the previous host doesn't have the authorization to redistribute the data
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3602/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3602/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3602", "html_url": "https://github.com/huggingface/datasets/pull/3602", "diff_url": "https://github.com/huggingface/datasets/pull/3602.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3602.patch", "merged_at": 1642607033000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3601
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3601/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3601/comments
https://api.github.com/repos/huggingface/datasets/issues/3601/events
https://github.com/huggingface/datasets/pull/3601
1,108,207,131
PR_kwDODunzps4xROtF
3,601
Add conll2003 licensing
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,642,604,441,000
1,642,612,648,000
1,642,612,648,000
MEMBER
null
Following https://github.com/huggingface/datasets/issues/3582, this PR updates the licensing section of the CoNLL2003 dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3601/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3601/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3601", "html_url": "https://github.com/huggingface/datasets/pull/3601", "diff_url": "https://github.com/huggingface/datasets/pull/3601.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3601.patch", "merged_at": 1642612648000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3600
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3600/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3600/comments
https://api.github.com/repos/huggingface/datasets/issues/3600/events
https://github.com/huggingface/datasets/pull/3600
1,108,131,878
PR_kwDODunzps4xQ-vt
3,600
Use old url for conll2003
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,642,600,609,000
1,642,601,788,000
1,642,601,788,000
MEMBER
null
As reported in https://github.com/huggingface/datasets/issues/3582, the CoNLL2003 data files are not available in the master branch of the repo that used to host them. For now, we can use the URL from an older commit to access the data files.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3600/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3600/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3600", "html_url": "https://github.com/huggingface/datasets/pull/3600", "diff_url": "https://github.com/huggingface/datasets/pull/3600.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3600.patch", "merged_at": 1642601788000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3599
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3599/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3599/comments
https://api.github.com/repos/huggingface/datasets/issues/3599/events
https://github.com/huggingface/datasets/issues/3599
1,108,111,607
I_kwDODunzps5CDHD3
3,599
The `add_column()` method does not work if used on dataset sliced with `select()`
{ "login": "ThGouzias", "id": 59422506, "node_id": "MDQ6VXNlcjU5NDIyNTA2", "avatar_url": "https://avatars.githubusercontent.com/u/59422506?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ThGouzias", "html_url": "https://github.com/ThGouzias", "followers_url": "https://api.github.com/users/ThGouzias/followers", "following_url": "https://api.github.com/users/ThGouzias/following{/other_user}", "gists_url": "https://api.github.com/users/ThGouzias/gists{/gist_id}", "starred_url": "https://api.github.com/users/ThGouzias/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ThGouzias/subscriptions", "organizations_url": "https://api.github.com/users/ThGouzias/orgs", "repos_url": "https://api.github.com/users/ThGouzias/repos", "events_url": "https://api.github.com/users/ThGouzias/events{/privacy}", "received_events_url": "https://api.github.com/users/ThGouzias/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "similar #3611 " ]
1,642,599,410,000
1,643,384,157,000
1,643,384,157,000
NONE
null
Hello, I posted this as a question on the forums ([here](https://discuss.huggingface.co/t/add-column-does-not-work-if-used-on-dataset-sliced-with-select/13893)):

I have a dataset with 2000 entries

> dataset = Dataset.from_dict({'colA': list(range(2000))})

and from which I want to extract the first one thousand rows, create a new dataset with these and also add a new column to it:

> dataset2 = dataset.select(list(range(1000)))
> final_dataset = dataset2.add_column('colB', list(range(1000)))

This gives an error

> ArrowInvalid: Added column's length must match table's length. Expected length 2000 but got length 1000

So it looks like even though it is a dataset with 1000 rows, it "remembers" the shape of the one it was sliced from.

## Actual results
```
ArrowInvalid Traceback (most recent call last)
<ipython-input-138-e806860f3ce3> in <module>
----> 1 final_dataset = dataset2.add_column('colB', list(range(1000)))

~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
    468 }
    469 # apply actual function
--> 470 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
    471 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
    472 # re-apply format to the output

~/.local/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
    404 # Call actual function
    405
--> 406 out = func(self, *args, **kwargs)
    407
    408 # Update fingerprint of in-place transforms + update in-place history of transforms

~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_column(self, name, column, new_fingerprint)
   3343 column_table = InMemoryTable.from_pydict({name: column})
   3344 # Concatenate tables horizontally
-> 3345 table = ConcatenationTable.from_tables([self._data, column_table], axis=1)
   3346 # Update features
   3347 info = self.info.copy()

~/.local/lib/python3.8/site-packages/datasets/table.py in from_tables(cls, tables, axis)
    729 table_blocks = to_blocks(table)
    730 blocks = _extend_blocks(blocks, table_blocks, axis=axis)
--> 731 return cls.from_blocks(blocks)
    732
    733 @property

~/.local/lib/python3.8/site-packages/datasets/table.py in from_blocks(cls, blocks)
    668 @classmethod
    669 def from_blocks(cls, blocks: TableBlockContainer) -> "ConcatenationTable":
--> 670 blocks = cls._consolidate_blocks(blocks)
    671 if isinstance(blocks, TableBlock):
    672 table = blocks

~/.local/lib/python3.8/site-packages/datasets/table.py in _consolidate_blocks(cls, blocks)
    664 return cls._merge_blocks(blocks, axis=0)
    665 else:
--> 666 return cls._merge_blocks(blocks)
    667
    668 @classmethod

~/.local/lib/python3.8/site-packages/datasets/table.py in _merge_blocks(cls, blocks, axis)
    650 merged_blocks += list(block_group)
    651 else: # both
--> 652 merged_blocks = [cls._merge_blocks(row_block, axis=1) for row_block in blocks]
    653 if all(len(row_block) == 1 for row_block in merged_blocks):
    654 merged_blocks = cls._merge_blocks(

~/.local/lib/python3.8/site-packages/datasets/table.py in <listcomp>(.0)
    650 merged_blocks += list(block_group)
    651 else: # both
--> 652 merged_blocks = [cls._merge_blocks(row_block, axis=1) for row_block in blocks]
    653 if all(len(row_block) == 1 for row_block in merged_blocks):
    654 merged_blocks = cls._merge_blocks(

~/.local/lib/python3.8/site-packages/datasets/table.py in _merge_blocks(cls, blocks, axis)
    647 for is_in_memory, block_group in groupby(blocks, key=lambda x: isinstance(x, InMemoryTable)):
    648 if is_in_memory:
--> 649 block_group = [InMemoryTable(cls._concat_blocks(list(block_group), axis=axis))]
    650 merged_blocks += list(block_group)
    651 else: # both

~/.local/lib/python3.8/site-packages/datasets/table.py in _concat_blocks(blocks, axis)
    626 else:
    627 for name, col in zip(table.column_names, table.columns):
--> 628 pa_table = pa_table.append_column(name, col)
    629 return pa_table
    630 else:

~/.local/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.append_column()

~/.local/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.add_column()

~/.local/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()

~/.local/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()

ArrowInvalid: Added column's length must match table's length. Expected length 2000 but got length 1000
```

A solution provided by @mariosasko is to use `dataset2.flatten_indices()` after the `select()` and before attempting to add the new column:

> dataset = Dataset.from_dict({'colA': list(range(2000))})
> dataset2 = dataset.select(list(range(1000)))
> dataset2 = dataset2.flatten_indices()
> final_dataset = dataset2.add_column('colB', list(range(1000)))

which works (a fenced, runnable version of this workaround follows below).

## Environment info

- `datasets` version: 1.13.2 (note: also checked with version 1.17.0, still the same error)
- Platform: Ubuntu 20.04.3
- Python version: 3.8.10
- PyArrow version: 6.0.0
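A fenced, runnable version of the quoted workaround — the calls are exactly those from the issue, with only a final print added for illustration:

```python
from datasets import Dataset

dataset = Dataset.from_dict({"colA": list(range(2000))})
dataset2 = dataset.select(list(range(1000)))

# flatten_indices() materializes the selection, so the underlying Arrow table
# really has 1000 rows instead of a 2000-row table plus an indices mapping
dataset2 = dataset2.flatten_indices()

final_dataset = dataset2.add_column("colB", list(range(1000)))
print(final_dataset)  # 1000 rows, columns colA and colB
```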
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3599/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3599/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3598
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3598/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3598/comments
https://api.github.com/repos/huggingface/datasets/issues/3598/events
https://github.com/huggingface/datasets/issues/3598
1,108,107,199
I_kwDODunzps5CDF-_
3,598
Readme info not being parsed to show on Dataset card page
{ "login": "davidcanovas", "id": 79796807, "node_id": "MDQ6VXNlcjc5Nzk2ODA3", "avatar_url": "https://avatars.githubusercontent.com/u/79796807?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidcanovas", "html_url": "https://github.com/davidcanovas", "followers_url": "https://api.github.com/users/davidcanovas/followers", "following_url": "https://api.github.com/users/davidcanovas/following{/other_user}", "gists_url": "https://api.github.com/users/davidcanovas/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidcanovas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidcanovas/subscriptions", "organizations_url": "https://api.github.com/users/davidcanovas/orgs", "repos_url": "https://api.github.com/users/davidcanovas/repos", "events_url": "https://api.github.com/users/davidcanovas/events{/privacy}", "received_events_url": "https://api.github.com/users/davidcanovas/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "i suspect a markdown parsing error, @severo do you want to take a quick look at it when you have some time?", "# Problem\r\nThe issue seems to coming from the front matter of the README\r\n```---\r\nannotations_creators:\r\n- no-annotation\r\nlanguage_creators:\r\n- machine-generated\r\nlanguages:\r\n- 'ca'\r\n- 'de'\r\nlicenses:\r\n- cc-by-4.0\r\nmultilinguality:\r\n- translation\r\npretty_name: Catalan-German aligned corpora to train NMT systems.\r\nsize_categories:\r\n- \"1M<n<10M\" \r\nsource_datasets:\r\n- extended|tilde_model\r\ntask_categories:\r\n- machine-translation\r\ntask_ids:\r\n- machine-translation\r\n---\r\n``` \r\n# Solution\r\nThe fix is to correctly style the README as explained [here](https://huggingface.co/docs/datasets/v1.12.0/dataset_card.html). I have also correctly parsed the font matter as shown below:\r\n```\r\n---\r\nannotations_creators: []\r\nlanguage_creators: [machine-generated]\r\nlanguages: ['ca', 'de']\r\nlicenses: []\r\nmultilinguality:\r\n- multilingual\r\npretty_name: 'Catalan-German aligned corpora to train NMT systems.'\r\nsize_categories: \r\n- 1M<n<10M\r\nsource_datasets: ['extended|tilde_model']\r\ntask_categories: ['machine-translation']\r\ntask_ids: ['machine-translation']\r\n---\r\n```\r\nYou can find the README for a sample dataset [here](https://huggingface.co/datasets/ritwikraha/Test)", "Thank you. It finally worked implementing your changes and leaving a white line between title and text in the description.", "Thanks, if this solves your issue, can you please close it?" ]
1,642,599,149,000
1,642,760,401,000
1,642,760,401,000
NONE
null
## Describe the bug

The info contained in the README.md file is not being shown in the dataset main page. Basic info and table of contents are properly formatted in the README.

## Steps to reproduce the bug

The README file is this one: https://huggingface.co/datasets/softcatala/Tilde-MODEL-Catalan/blob/main/README.md

## Expected results

README info should appear in the Dataset card page.

## Actual results

Nothing is shown. However, labels are parsed and shown successfully.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3598/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3598/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3597
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3597/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3597/comments
https://api.github.com/repos/huggingface/datasets/issues/3597/events
https://github.com/huggingface/datasets/issues/3597
1,108,092,864
I_kwDODunzps5CDCfA
3,597
ERROR: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode: /content
{ "login": "amitkml", "id": 49492030, "node_id": "MDQ6VXNlcjQ5NDkyMDMw", "avatar_url": "https://avatars.githubusercontent.com/u/49492030?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amitkml", "html_url": "https://github.com/amitkml", "followers_url": "https://api.github.com/users/amitkml/followers", "following_url": "https://api.github.com/users/amitkml/following{/other_user}", "gists_url": "https://api.github.com/users/amitkml/gists{/gist_id}", "starred_url": "https://api.github.com/users/amitkml/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amitkml/subscriptions", "organizations_url": "https://api.github.com/users/amitkml/orgs", "repos_url": "https://api.github.com/users/amitkml/repos", "events_url": "https://api.github.com/users/amitkml/events{/privacy}", "received_events_url": "https://api.github.com/users/amitkml/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi! The `cd` command in Jupyer/Colab needs to start with `%`, so this should work:\r\n```\r\n!git clone https://github.com/huggingface/datasets.git\r\n%cd datasets\r\n!pip install -e \".[streaming]\"\r\n```" ]
1,642,598,368,000
1,644,828,394,000
1,644,828,394,000
NONE
null
## Bug

The install of `datasets` with streaming support is giving the following error.

## Steps to reproduce the bug

```python
! git clone https://github.com/huggingface/datasets.git
! cd datasets
! pip install -e ".[streaming]"
```

## Actual results

Cloning into 'datasets'...
remote: Enumerating objects: 50816, done.
remote: Counting objects: 100% (2356/2356), done.
remote: Compressing objects: 100% (1606/1606), done.
remote: Total 50816 (delta 834), reused 1741 (delta 525), pack-reused 48460
Receiving objects: 100% (50816/50816), 72.47 MiB | 27.68 MiB/s, done.
Resolving deltas: 100% (22541/22541), done.
Checking out files: 100% (6722/6722), done.
ERROR: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode: /content
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3597/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3597/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3596
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3596/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3596/comments
https://api.github.com/repos/huggingface/datasets/issues/3596/events
https://github.com/huggingface/datasets/issues/3596
1,107,345,338
I_kwDODunzps5CAL-6
3,596
Loss of cast `Image` feature on certain dataset method
{ "login": "davanstrien", "id": 8995957, "node_id": "MDQ6VXNlcjg5OTU5NTc=", "avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davanstrien", "html_url": "https://github.com/davanstrien", "followers_url": "https://api.github.com/users/davanstrien/followers", "following_url": "https://api.github.com/users/davanstrien/following{/other_user}", "gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}", "starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions", "organizations_url": "https://api.github.com/users/davanstrien/orgs", "repos_url": "https://api.github.com/users/davanstrien/repos", "events_url": "https://api.github.com/users/davanstrien/events{/privacy}", "received_events_url": "https://api.github.com/users/davanstrien/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi! Thanks for reporting! The issue with `cast_column` should be fixed by #3575 and after we merge that PR I'll start working on the `push_to_hub` support for the `Image`/`Audio` feature.", "> Hi! Thanks for reporting! The issue with `cast_column` should be fixed by #3575 and after we merge that PR I'll start working on the `push_to_hub` support for the `Image`/`Audio` feature.\r\n\r\nThanks, I'll keep an eye out for #3575 getting merged. I managed to use `push_to_hub` sucesfully with images when they were loaded via `map` - something like `ds.map(lambda example: {\"img\": load_image_function(example['fname']})`, this only pushed the images to the hub if the `load_image_function` return a PIL Image without the filename attribute though. I guess this might often be the prefered behaviour though. \r\n", "Hi ! We merged the PR and did a release of `datasets` that includes the changes. Can you try updating `datasets` and try again ?", "> Hi ! We merged the PR and did a release of `datasets` that includes the changes. Can you try updating `datasets` and try again ?\r\n\r\nThanks for checking. There is no longer an error when calling `select` but it appears the cast value isn't preserved. Before `select`\r\n\r\n```python\r\ndataset.features\r\n{'url': Image(id=None)}\r\n```\r\n\r\nafter select:\r\n```\r\n{'url': Value(dtype='string', id=None)}\r\n```\r\n\r\nUpdated Colab example [here](https://colab.research.google.com/gist/davanstrien/4e88f55a3675c279b5c2f64299ae5c6f/potential_casting_bug.ipynb) ", "Hmmm, if I re-run your google colab I'm getting the right type at the end:\r\n```\r\nsample.features\r\n# {'url': Image(id=None)}\r\n```", "Appolgies - I've just run again and also got this output. I have also sucesfully used the `push_to_hub` method. I think this is fixed now so will close this issue. ", "Fixed in #3575 " ]
1,642,538,641,000
1,642,788,448,000
1,642,788,448,000
CONTRIBUTOR
null
## Describe the bug

When a column is cast to an `Image` feature, the cast type appears to be lost during certain operations. I first noticed this when using the `push_to_hub` method on a dataset that contained urls pointing to images which had been cast to an `image`. This also happens when using select on a dataset which has had a column cast to an `Image`. I suspect this might be related to https://github.com/huggingface/datasets/pull/3556 but I don't believe that pull request fixes this issue.

## Steps to reproduce the bug

An example of casting a url to an image followed by using the `select` method:

```python
from datasets import Dataset
from datasets import features

url = "https://cf.ltkcdn.net/cats/images/std-lg/246866-1200x816-grey-white-kitten.webp"
data_dict = {"url": [url]*2}
dataset = Dataset.from_dict(data_dict)
dataset = dataset.cast_column('url', features.Image())
sample = dataset.select([1])
```

[example notebook](https://gist.github.com/davanstrien/06e53f4383c28ae77ce1b30d0eaf0d70#file-potential_casting_bug-ipynb)

## Expected results

The cast value is maintained when further methods are applied to the dataset.

## Actual results

```python
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-12-47f393bc2d0d> in <module>()
----> 1 sample = dataset.select([1])

4 frames
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
    487 }
    488 # apply actual function
--> 489 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
    490 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
    491 # re-apply format to the output

/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
    409 # Call actual function
    410
--> 411 out = func(self, *args, **kwargs)
    412
    413 # Update fingerprint of in-place transforms + update in-place history of transforms

/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint)
   2772 )
   2773 else:
-> 2774 return self._new_dataset_with_indices(indices_buffer=buf_writer.getvalue(), fingerprint=new_fingerprint)
   2775
   2776 @transmit_format

/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in _new_dataset_with_indices(self, indices_cache_file_name, indices_buffer, fingerprint)
   2688 split=self.split,
   2689 indices_table=indices_table,
-> 2690 fingerprint=fingerprint,
   2691 )
   2692

/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in __init__(self, arrow_table, info, split, indices_table, fingerprint)
    664 if self.info.features.type != inferred_features.type:
    665 raise ValueError(
--> 666 f"External features info don't match the dataset:\nGot\n{self.info.features}\nwith type\n{self.info.features.type}\n\nbut expected something like\n{inferred_features}\nwith type\n{inferred_features.type}"
    667 )
    668

ValueError: External features info don't match the dataset:
Got
{'url': Image(id=None)}
with type
struct<url: extension<arrow.py_extension_type<ImageExtensionType>>>

but expected something like
{'url': Value(dtype='string', id=None)}
with type
struct<url: string>
```

## Environment info

- `datasets` version: 1.17.1.dev0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3596/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3596/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3595
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3595/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3595/comments
https://api.github.com/repos/huggingface/datasets/issues/3595/events
https://github.com/huggingface/datasets/pull/3595
1,107,260,527
PR_kwDODunzps4xOIxH
3,595
Add ImageNet toy datasets from fastai
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,642,532,615,000
1,642,592,016,000
null
CONTRIBUTOR
null
Adds the ImageNet toy datasets from FastAI: Imagenette, Imagewoof and Imagewang.

TODOs:
* [ ] add dummy data
* [ ] add dataset card
* [ ] generate `dataset_info.json`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3595/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3595/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3595", "html_url": "https://github.com/huggingface/datasets/pull/3595", "diff_url": "https://github.com/huggingface/datasets/pull/3595.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3595.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3594
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3594/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3594/comments
https://api.github.com/repos/huggingface/datasets/issues/3594/events
https://github.com/huggingface/datasets/pull/3594
1,107,174,619
PR_kwDODunzps4xN3Kk
3,594
fix multiple language downloading in mC4
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The CI failure is unrelated to your PR and fixed on master, merging :)" ]
1,642,526,719,000
1,642,591,377,000
1,642,533,022,000
CONTRIBUTOR
null
If we try to access multiple languages of the [mC4 dataset](https://github.com/huggingface/datasets/tree/master/datasets/mc4), it will throw an error. For example, if we do

```python
mc4_subset_two_langs = load_dataset("mc4", languages=["st", "su"])
```

we get

```
FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/multilingual/c4-st+su.tfrecord-00000-of-00002.json.gz
```

Now it should work. Check it (from the root dir of the project):

```python
mc4_subset_two_langs = load_dataset("./datasets/mc4/", languages=["st", "su"])
```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3594/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3594/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3594", "html_url": "https://github.com/huggingface/datasets/pull/3594", "diff_url": "https://github.com/huggingface/datasets/pull/3594.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3594.patch", "merged_at": 1642533022000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3593
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3593/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3593/comments
https://api.github.com/repos/huggingface/datasets/issues/3593/events
https://github.com/huggingface/datasets/pull/3593
1,107,070,852
PR_kwDODunzps4xNhTu
3,593
Update README.md
{ "login": "borgr", "id": 6416600, "node_id": "MDQ6VXNlcjY0MTY2MDA=", "avatar_url": "https://avatars.githubusercontent.com/u/6416600?v=4", "gravatar_id": "", "url": "https://api.github.com/users/borgr", "html_url": "https://github.com/borgr", "followers_url": "https://api.github.com/users/borgr/followers", "following_url": "https://api.github.com/users/borgr/following{/other_user}", "gists_url": "https://api.github.com/users/borgr/gists{/gist_id}", "starred_url": "https://api.github.com/users/borgr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borgr/subscriptions", "organizations_url": "https://api.github.com/users/borgr/orgs", "repos_url": "https://api.github.com/users/borgr/repos", "events_url": "https://api.github.com/users/borgr/events{/privacy}", "received_events_url": "https://api.github.com/users/borgr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,642,521,136,000
1,642,698,893,000
1,642,698,893,000
CONTRIBUTOR
null
Towards documenting the license of the Tweet Eval parts.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3593/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3593/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3593", "html_url": "https://github.com/huggingface/datasets/pull/3593", "diff_url": "https://github.com/huggingface/datasets/pull/3593.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3593.patch", "merged_at": 1642698892000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3592
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3592/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3592/comments
https://api.github.com/repos/huggingface/datasets/issues/3592/events
https://github.com/huggingface/datasets/pull/3592
1,107,026,723
PR_kwDODunzps4xNYIW
3,592
Add QuickDraw dataset
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,642,518,819,000
1,642,518,819,000
null
CONTRIBUTOR
null
Add the QuickDraw dataset.

TODOs:
* [ ] add dummy data
* [ ] add dataset card
* [ ] generate `dataset_info.json`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3592/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3592/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3592", "html_url": "https://github.com/huggingface/datasets/pull/3592", "diff_url": "https://github.com/huggingface/datasets/pull/3592.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3592.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3591
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3591/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3591/comments
https://api.github.com/repos/huggingface/datasets/issues/3591/events
https://github.com/huggingface/datasets/pull/3591
1,106,928,613
PR_kwDODunzps4xNDoB
3,591
Add support for time, date, duration, and decimal dtypes
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Is there a dataset which uses these four datatypes for tests purposes?\r\n", "@severo Not yet. I'll let you know if that changes." ]
1,642,513,565,000
1,643,653,774,000
1,642,700,253,000
CONTRIBUTOR
null
Add support for the pyarrow time (maps to `datetime.time` in python), date (maps to `datetime.date` in python), duration (maps to `datetime.timedelta` in python), and decimal (maps to `decimal.Decimal` in python) dtypes. This should be helpful when writing scripts for time-series datasets. (A pyarrow-level sketch of these mappings follows below.)
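A minimal pyarrow-level sketch of the four dtypes and their Python-side mappings (pure pyarrow, independent of how `datasets` ultimately exposes them):

```python
import datetime
from decimal import Decimal

import pyarrow as pa

# pyarrow infers each of the four dtypes directly from the Python values:
table = pa.table({
    "time": pa.array([datetime.time(12, 30)]),            # time64[us]
    "date": pa.array([datetime.date(2022, 1, 19)]),       # date32[day]
    "duration": pa.array([datetime.timedelta(hours=1)]),  # duration[us]
    "decimal": pa.array([Decimal("3.14")]),               # decimal128(3, 2)
})
print(table.schema)

# Converting back to Python restores the same types:
# datetime.time / datetime.date / datetime.timedelta / decimal.Decimal
print(table.to_pydict())
```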
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3591/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3591/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3591", "html_url": "https://github.com/huggingface/datasets/pull/3591", "diff_url": "https://github.com/huggingface/datasets/pull/3591.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3591.patch", "merged_at": 1642700253000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3590
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3590/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3590/comments
https://api.github.com/repos/huggingface/datasets/issues/3590/events
https://github.com/huggingface/datasets/pull/3590
1,106,784,860
PR_kwDODunzps4xMlGg
3,590
Update ANLI README.md
{ "login": "borgr", "id": 6416600, "node_id": "MDQ6VXNlcjY0MTY2MDA=", "avatar_url": "https://avatars.githubusercontent.com/u/6416600?v=4", "gravatar_id": "", "url": "https://api.github.com/users/borgr", "html_url": "https://github.com/borgr", "followers_url": "https://api.github.com/users/borgr/followers", "following_url": "https://api.github.com/users/borgr/following{/other_user}", "gists_url": "https://api.github.com/users/borgr/gists{/gist_id}", "starred_url": "https://api.github.com/users/borgr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borgr/subscriptions", "organizations_url": "https://api.github.com/users/borgr/orgs", "repos_url": "https://api.github.com/users/borgr/repos", "events_url": "https://api.github.com/users/borgr/events{/privacy}", "received_events_url": "https://api.github.com/users/borgr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,642,504,973,000
1,642,697,921,000
1,642,697,921,000
CONTRIBUTOR
null
Update the license and a few minor details concerning ANLI
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3590/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3590/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3590", "html_url": "https://github.com/huggingface/datasets/pull/3590", "diff_url": "https://github.com/huggingface/datasets/pull/3590.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3590.patch", "merged_at": 1642697921000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3589
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3589/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3589/comments
https://api.github.com/repos/huggingface/datasets/issues/3589/events
https://github.com/huggingface/datasets/pull/3589
1,106,766,114
PR_kwDODunzps4xMhGp
3,589
Pin torchmetrics to fix the COMET test
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,642,503,829,000
1,642,503,896,000
1,642,503,895,000
MEMBER
null
Torchmetrics 0.7.0 got released and has issues with `transformers` (see https://github.com/PyTorchLightning/metrics/issues/770) I'm pinning it to 0.6.0 in the CI, since 0.7.0 makes the COMET metric test fail. COMET requires torchmetrics==0.6.0 anyway.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3589/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3589/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3589", "html_url": "https://github.com/huggingface/datasets/pull/3589", "diff_url": "https://github.com/huggingface/datasets/pull/3589.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3589.patch", "merged_at": 1642503895000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3588
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3588/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3588/comments
https://api.github.com/repos/huggingface/datasets/issues/3588/events
https://github.com/huggingface/datasets/pull/3588
1,106,749,000
PR_kwDODunzps4xMdiC
3,588
Update HellaSwag README.md
{ "login": "borgr", "id": 6416600, "node_id": "MDQ6VXNlcjY0MTY2MDA=", "avatar_url": "https://avatars.githubusercontent.com/u/6416600?v=4", "gravatar_id": "", "url": "https://api.github.com/users/borgr", "html_url": "https://github.com/borgr", "followers_url": "https://api.github.com/users/borgr/followers", "following_url": "https://api.github.com/users/borgr/following{/other_user}", "gists_url": "https://api.github.com/users/borgr/gists{/gist_id}", "starred_url": "https://api.github.com/users/borgr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borgr/subscriptions", "organizations_url": "https://api.github.com/users/borgr/orgs", "repos_url": "https://api.github.com/users/borgr/repos", "events_url": "https://api.github.com/users/borgr/events{/privacy}", "received_events_url": "https://api.github.com/users/borgr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,642,502,775,000
1,642,697,863,000
1,642,697,863,000
CONTRIBUTOR
null
Adding information from the git repo and paper that was missing
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3588/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3588/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3588", "html_url": "https://github.com/huggingface/datasets/pull/3588", "diff_url": "https://github.com/huggingface/datasets/pull/3588.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3588.patch", "merged_at": 1642697863000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3587
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3587/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3587/comments
https://api.github.com/repos/huggingface/datasets/issues/3587/events
https://github.com/huggingface/datasets/issues/3587
1,106,719,182
I_kwDODunzps5B9zHO
3,587
No module named 'fsspec.archive'
{ "login": "shuuchen", "id": 13246825, "node_id": "MDQ6VXNlcjEzMjQ2ODI1", "avatar_url": "https://avatars.githubusercontent.com/u/13246825?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shuuchen", "html_url": "https://github.com/shuuchen", "followers_url": "https://api.github.com/users/shuuchen/followers", "following_url": "https://api.github.com/users/shuuchen/following{/other_user}", "gists_url": "https://api.github.com/users/shuuchen/gists{/gist_id}", "starred_url": "https://api.github.com/users/shuuchen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shuuchen/subscriptions", "organizations_url": "https://api.github.com/users/shuuchen/orgs", "repos_url": "https://api.github.com/users/shuuchen/repos", "events_url": "https://api.github.com/users/shuuchen/events{/privacy}", "received_events_url": "https://api.github.com/users/shuuchen/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
1,642,501,021,000
1,642,501,990,000
1,642,501,990,000
NONE
null
## Describe the bug Cannot import datasets after installation. ## Steps to reproduce the bug ```shell $ python Python 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> import datasets Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/__init__.py", line 34, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 61, in <module> from .arrow_writer import ArrowWriter, OptimizedTypedSequence File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/arrow_writer.py", line 28, in <module> from .features import ( File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/features/__init__.py", line 2, in <module> from .audio import Audio File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/features/audio.py", line 7, in <module> from ..utils.streaming_download_manager import xopen File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py", line 18, in <module> from ..filesystems import COMPRESSION_FILESYSTEMS File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/filesystems/__init__.py", line 6, in <module> from . import compression File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/filesystems/compression.py", line 5, in <module> from fsspec.archive import AbstractArchiveFileSystem ModuleNotFoundError: No module named 'fsspec.archive' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3587/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3587/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3586
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3586/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3586/comments
https://api.github.com/repos/huggingface/datasets/issues/3586/events
https://github.com/huggingface/datasets/issues/3586
1,106,455,672
I_kwDODunzps5B8yx4
3,586
Revisit `enable/disable_` toggle function prefix
{ "login": "jaketae", "id": 25360440, "node_id": "MDQ6VXNlcjI1MzYwNDQw", "avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jaketae", "html_url": "https://github.com/jaketae", "followers_url": "https://api.github.com/users/jaketae/followers", "following_url": "https://api.github.com/users/jaketae/following{/other_user}", "gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}", "starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jaketae/subscriptions", "organizations_url": "https://api.github.com/users/jaketae/orgs", "repos_url": "https://api.github.com/users/jaketae/repos", "events_url": "https://api.github.com/users/jaketae/events{/privacy}", "received_events_url": "https://api.github.com/users/jaketae/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[]
1,642,478,995,000
1,647,270,068,000
1,647,270,068,000
MEMBER
null
As discussed in https://github.com/huggingface/transformers/pull/15167, we should revisit the `enable/disable_` toggle function prefix, potentially in favor of `set_enabled_`. Concretely, this translates to - De-deprecating `disable_progress_bar()` - Adding `enable_progress_bar()` - On the caching side, adding `enable_caching` and `disable_caching` Additional decisions have to be made with regards to the existing `set_enabled_X` functions; that is, whether to keep them as is or deprecate them in favor of the aforementioned functions. cc @mariosasko @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3586/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3586/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3585
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3585/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3585/comments
https://api.github.com/repos/huggingface/datasets/issues/3585/events
https://github.com/huggingface/datasets/issues/3585
1,105,821,470
I_kwDODunzps5B6X8e
3,585
Datasets streaming + map doesn't work for `Audio`
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "This seems related to https://github.com/huggingface/datasets/issues/3505." ]
1,642,424,142,000
1,642,685,280,000
1,642,685,280,000
MEMBER
null
## Describe the bug When using audio datasets in streaming mode, applying a `map(...)` before iterating leads to an error as the key `array` does not exist anymore. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("common_voice", "en", streaming=True, split="train") def map_fn(batch): print("audio keys", batch["audio"].keys()) batch["audio"] = batch["audio"]["array"][:100] return batch ds = ds.map(map_fn) sample = next(iter(ds)) ``` I think the audio is somehow decoded before `.map(...)` is actually called. ## Expected results IMO, the above code snippet should work. ## Actual results ```bash audio keys dict_keys(['path', 'bytes']) Traceback (most recent call last): File "./run_audio.py", line 15, in <module> sample = next(iter(ds)) File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 341, in __iter__ for key, example in self._iter(): File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 338, in _iter yield from ex_iterable File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 192, in __iter__ yield key, self.function(example) File "./run_audio.py", line 9, in map_fn batch["input"] = batch["audio"]["array"][:100] KeyError: 'array' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.1.dev0 - Platform: Linux-5.3.0-64-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3585/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3585/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3584
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3584/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3584/comments
https://api.github.com/repos/huggingface/datasets/issues/3584/events
https://github.com/huggingface/datasets/issues/3584
1,105,231,768
I_kwDODunzps5B4H-Y
3,584
https://huggingface.co/datasets/huggingface/transformers-metadata
{ "login": "ecankirkic", "id": 37082592, "node_id": "MDQ6VXNlcjM3MDgyNTky", "avatar_url": "https://avatars.githubusercontent.com/u/37082592?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ecankirkic", "html_url": "https://github.com/ecankirkic", "followers_url": "https://api.github.com/users/ecankirkic/followers", "following_url": "https://api.github.com/users/ecankirkic/following{/other_user}", "gists_url": "https://api.github.com/users/ecankirkic/gists{/gist_id}", "starred_url": "https://api.github.com/users/ecankirkic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ecankirkic/subscriptions", "organizations_url": "https://api.github.com/users/ecankirkic/orgs", "repos_url": "https://api.github.com/users/ecankirkic/repos", "events_url": "https://api.github.com/users/ecankirkic/events{/privacy}", "received_events_url": "https://api.github.com/users/ecankirkic/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892913, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEz", "url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": "This will not be worked on" }, { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
null
[]
null
[]
1,642,378,694,000
1,644,828,687,000
1,644,828,687,000
NONE
null
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3584/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3584/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3583
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3583/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3583/comments
https://api.github.com/repos/huggingface/datasets/issues/3583/events
https://github.com/huggingface/datasets/issues/3583
1,105,195,144
I_kwDODunzps5B3_CI
3,583
Add The Medical Segmentation Decathlon Dataset
{ "login": "omarespejel", "id": 4755430, "node_id": "MDQ6VXNlcjQ3NTU0MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4", "gravatar_id": "", "url": "https://api.github.com/users/omarespejel", "html_url": "https://github.com/omarespejel", "followers_url": "https://api.github.com/users/omarespejel/followers", "following_url": "https://api.github.com/users/omarespejel/following{/other_user}", "gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}", "starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions", "organizations_url": "https://api.github.com/users/omarespejel/orgs", "repos_url": "https://api.github.com/users/omarespejel/repos", "events_url": "https://api.github.com/users/omarespejel/events{/privacy}", "received_events_url": "https://api.github.com/users/omarespejel/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 3608941089, "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision", "name": "vision", "color": "bfdadc", "default": false, "description": "Vision datasets" } ]
open
false
{ "login": "pri1311", "id": 64613009, "node_id": "MDQ6VXNlcjY0NjEzMDA5", "avatar_url": "https://avatars.githubusercontent.com/u/64613009?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pri1311", "html_url": "https://github.com/pri1311", "followers_url": "https://api.github.com/users/pri1311/followers", "following_url": "https://api.github.com/users/pri1311/following{/other_user}", "gists_url": "https://api.github.com/users/pri1311/gists{/gist_id}", "starred_url": "https://api.github.com/users/pri1311/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pri1311/subscriptions", "organizations_url": "https://api.github.com/users/pri1311/orgs", "repos_url": "https://api.github.com/users/pri1311/repos", "events_url": "https://api.github.com/users/pri1311/events{/privacy}", "received_events_url": "https://api.github.com/users/pri1311/received_events", "type": "User", "site_admin": false }
[ { "login": "pri1311", "id": 64613009, "node_id": "MDQ6VXNlcjY0NjEzMDA5", "avatar_url": "https://avatars.githubusercontent.com/u/64613009?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pri1311", "html_url": "https://github.com/pri1311", "followers_url": "https://api.github.com/users/pri1311/followers", "following_url": "https://api.github.com/users/pri1311/following{/other_user}", "gists_url": "https://api.github.com/users/pri1311/gists{/gist_id}", "starred_url": "https://api.github.com/users/pri1311/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pri1311/subscriptions", "organizations_url": "https://api.github.com/users/pri1311/orgs", "repos_url": "https://api.github.com/users/pri1311/repos", "events_url": "https://api.github.com/users/pri1311/events{/privacy}", "received_events_url": "https://api.github.com/users/pri1311/received_events", "type": "User", "site_admin": false } ]
null
[ "Hello! I have recently been involved with a medical image segmentation project myself and was going through the `The Medical Segmentation Decathlon Dataset` as well. \r\nI haven't yet had experience adding datasets to this repository yet but would love to get started. Should I take this issue?\r\nIf yes, I've got two questions -\r\n1. There are 10 different datasets available, so are all datasets to be added in a single PR, or one at a time? \r\n2. Since it's a competition, masks for the test-set are not available. How is that to be tackled? Sorry if it's a silly question, I have recently started exploring `datasets`.", "Hi! Sure, feel free to take this issue. You can self-assign the issue by commenting `#self-assign`.\r\n\r\nTo answer your questions:\r\n1. It makes the most sense to add each one as a separate config, so one dataset script with 10 configs in a single PR.\r\n2. Just set masks in the test set to `None`.\r\n\r\nNote that the images/masks in this dataset are in NIfTI format, which our `Image` feature currently doesn't support, so I think it's best to yield the paths to the images/masks in the script and add a preprocessing section to the card where we explain how to load/process the images/masks with `nibabel` (I can help with that). \r\n\r\n", "> Note that the images/masks in this dataset are in NIfTI format, which our `Image` feature currently doesn't support, so I think it's best to yield the paths to the images/masks in the script and add a preprocessing section to the card where we explain how to load/process the images/masks with `nibabel` (I can help with that).\r\n\r\nGotcha, thanks. Will start working on the issue and let you know in case of any doubt.", "#self-assign", "This is great! There is a first model on the HUb that uses this dataset! https://huggingface.co/MONAI/example_spleen_segmentation" ]
1,642,369,345,000
1,647,600,282,000
null
NONE
null
## Adding a Dataset - **Name:** *The Medical Segmentation Decathlon Dataset* - **Description:** The underlying data set was designed to explore the axis of difficulties typically encountered when dealing with medical images, such as small data sets, unbalanced labels, multi-site data, and small objects. - **Paper:** [link to the dataset paper if available](https://arxiv.org/abs/2106.05735) - **Data:** http://medicaldecathlon.com/ - **Motivation:** Hugging Face seeks to democratize ML for society. One of the growing niches within ML is the ML + Medicine community. Key data sets will help increase the supply of HF resources for starting an initial community. (cc @osanseviero @abidlabs ) Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3583/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3583/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3582
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3582/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3582/comments
https://api.github.com/repos/huggingface/datasets/issues/3582/events
https://github.com/huggingface/datasets/issues/3582
1,104,877,303
I_kwDODunzps5B2xb3
3,582
conll 2003 dataset source url is no longer valid
{ "login": "rcanand", "id": 303900, "node_id": "MDQ6VXNlcjMwMzkwMA==", "avatar_url": "https://avatars.githubusercontent.com/u/303900?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rcanand", "html_url": "https://github.com/rcanand", "followers_url": "https://api.github.com/users/rcanand/followers", "following_url": "https://api.github.com/users/rcanand/following{/other_user}", "gists_url": "https://api.github.com/users/rcanand/gists{/gist_id}", "starred_url": "https://api.github.com/users/rcanand/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcanand/subscriptions", "organizations_url": "https://api.github.com/users/rcanand/orgs", "repos_url": "https://api.github.com/users/rcanand/repos", "events_url": "https://api.github.com/users/rcanand/events{/privacy}", "received_events_url": "https://api.github.com/users/rcanand/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "I came to open the same issue.", "Thanks for reporting !\r\n\r\nI pushed a temporary fix on `master` that uses an URL from a previous commit to access the dataset for now, until we have a better solution", "I changed the URL again to use another host, the fix is available on `master` and we'll probably do a new release of `datasets` tomorrow.\r\n\r\nIn the meantime, feel free to do `load_dataset(..., revision=\"master\")` to use the fixed script", "We just released a new version of `datasets` with a working URL. Feel free to update `datasets` and try again :)" ]
1,642,287,857,000
1,642,784,252,000
1,642,784,252,000
NONE
null
## Describe the bug Loading `conll2003` dataset fails because it was removed (just yesterday 1/14/2022) from the location it is looking for. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("conll2003") ``` ## Expected results The dataset should load. ## Actual results It is looking for the dataset at `https://github.com/davidsbatista/NER-datasets/raw/master/CONLL2003/train.txt` but it was removed from there yesterday (see [commit](https://github.com/davidsbatista/NER-datasets/commit/9d8f45cc7331569af8eb3422bbe1c97cbebd5690) that removed the file and related [issue](https://github.com/davidsbatista/NER-datasets/issues/8)). - We should replace this with an alternate valid location. - this is being referenced in the huggingface course chapter 7 [colab notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/chapter7/section2_pt.ipynb), which is also broken. ```python FileNotFoundError Traceback (most recent call last) <ipython-input-4-27c956bec93c> in <module>() 1 from datasets import load_dataset 2 ----> 3 raw_datasets = load_dataset("conll2003") 11 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params) 610 ) 611 elif response is not None and response.status_code == 404: --> 612 raise FileNotFoundError(f"Couldn't find file at {url}") 613 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}") 614 if head_error is not None: FileNotFoundError: Couldn't find file at https://github.com/davidsbatista/NER-datasets/raw/master/CONLL2003/train.txt ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: - Python version: - PyArrow version:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3582/reactions", "total_count": 5, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 5, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3582/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3581
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3581/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3581/comments
https://api.github.com/repos/huggingface/datasets/issues/3581/events
https://github.com/huggingface/datasets/issues/3581
1,104,857,822
I_kwDODunzps5B2sre
3,581
Unable to create a dataset from a parquet file in S3
{ "login": "regCode", "id": 18012903, "node_id": "MDQ6VXNlcjE4MDEyOTAz", "avatar_url": "https://avatars.githubusercontent.com/u/18012903?v=4", "gravatar_id": "", "url": "https://api.github.com/users/regCode", "html_url": "https://github.com/regCode", "followers_url": "https://api.github.com/users/regCode/followers", "following_url": "https://api.github.com/users/regCode/following{/other_user}", "gists_url": "https://api.github.com/users/regCode/gists{/gist_id}", "starred_url": "https://api.github.com/users/regCode/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/regCode/subscriptions", "organizations_url": "https://api.github.com/users/regCode/orgs", "repos_url": "https://api.github.com/users/regCode/repos", "events_url": "https://api.github.com/users/regCode/events{/privacy}", "received_events_url": "https://api.github.com/users/regCode/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi ! Currently it only works with local paths, file-like objects are not supported yet" ]
1,642,282,456,000
1,644,828,777,000
null
NONE
null
## Describe the bug Trying to create a dataset from a parquet file in S3. ## Steps to reproduce the bug ```python import s3fs from datasets import Dataset s3 = s3fs.S3FileSystem(anon=False) with s3.open(PATH_LTR_TOY_CLEAN_DATASET, 'rb') as s3file: dataset = Dataset.from_parquet(s3file) ``` ## Expected results A new Dataset object ## Actual results ```AttributeError: 'S3File' object has no attribute 'decode'``` ``` AttributeError Traceback (most recent call last) <command-2452877612515691> in <module> 5 6 with s3.open(PATH_LTR_TOY_CLEAN_DATASET, 'rb') as s3file: ----> 7 dataset = Dataset.from_parquet(s3file) /databricks/python/lib/python3.8/site-packages/datasets/arrow_dataset.py in from_parquet(path_or_paths, split, features, cache_dir, keep_in_memory, columns, **kwargs) 907 from .io.parquet import ParquetDatasetReader 908 --> 909 return ParquetDatasetReader( 910 path_or_paths, 911 split=split, /databricks/python/lib/python3.8/site-packages/datasets/io/parquet.py in __init__(self, path_or_paths, split, features, cache_dir, keep_in_memory, **kwargs) 28 path_or_paths = path_or_paths if isinstance(path_or_paths, dict) else {self.split: path_or_paths} 29 hash = _PACKAGED_DATASETS_MODULES["parquet"][1] ---> 30 self.builder = Parquet( 31 cache_dir=cache_dir, 32 data_files=path_or_paths, /databricks/python/lib/python3.8/site-packages/datasets/builder.py in __init__(self, cache_dir, name, hash, base_path, info, features, use_auth_token, namespace, data_files, data_dir, **config_kwargs) 246 247 if data_files is not None and not isinstance(data_files, DataFilesDict): --> 248 data_files = DataFilesDict.from_local_or_remote( 249 sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token 250 ) /databricks/python/lib/python3.8/site-packages/datasets/data_files.py in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token) 576 for key, patterns_for_key in patterns.items(): 577 out[key] = ( --> 578 DataFilesList.from_local_or_remote( 579 patterns_for_key, 580 base_path=base_path, /databricks/python/lib/python3.8/site-packages/datasets/data_files.py in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token) 544 ) -> "DataFilesList": 545 base_path = base_path if base_path is not None else str(Path().resolve()) --> 546 data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions) 547 origin_metadata = _get_origin_metadata_locally_or_by_urls(data_files, use_auth_token=use_auth_token) 548 return cls(data_files, origin_metadata) /databricks/python/lib/python3.8/site-packages/datasets/data_files.py in resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions) 191 data_files = [] 192 for pattern in patterns: --> 193 if is_remote_url(pattern): 194 data_files.append(Url(pattern)) 195 else: /databricks/python/lib/python3.8/site-packages/datasets/utils/file_utils.py in is_remote_url(url_or_filename) 115 116 def is_remote_url(url_or_filename: str) -> bool: --> 117 parsed = urlparse(url_or_filename) 118 return parsed.scheme in ("http", "https", "s3", "gs", "hdfs", "ftp") 119 /usr/lib/python3.8/urllib/parse.py in urlparse(url, scheme, allow_fragments) 370 Note that we don't break the components up in smaller bits 371 (e.g. netloc is a single string) and we don't expand % escapes.""" --> 372 url, scheme, _coerce_result = _coerce_args(url, scheme) 373 splitresult = urlsplit(url, scheme, allow_fragments) 374 scheme, netloc, url, query, fragment = splitresult /usr/lib/python3.8/urllib/parse.py in _coerce_args(*args) 122 if str_input: 123 return args + (_noop,) --> 124 return _decode_args(args) + (_encode_result,) 125 126 # Result objects are more helpful than simple tuples /usr/lib/python3.8/urllib/parse.py in _decode_args(args, encoding, errors) 106 def _decode_args(args, encoding=_implicit_encoding, 107 errors=_implicit_errors): --> 108 return tuple(x.decode(encoding, errors) if x else '' for x in args) 109 110 def _coerce_args(*args): /usr/lib/python3.8/urllib/parse.py in <genexpr>(.0) 106 def _decode_args(args, encoding=_implicit_encoding, 107 errors=_implicit_errors): --> 108 return tuple(x.decode(encoding, errors) if x else '' for x in args) 109 110 def _coerce_args(*args): AttributeError: 'S3File' object has no attribute 'decode' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.0 - Platform: Ubuntu 20.04.3 LTS - Python version: 3.8.10 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3581/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3581/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3580
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3580/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3580/comments
https://api.github.com/repos/huggingface/datasets/issues/3580/events
https://github.com/huggingface/datasets/issues/3580
1,104,663,242
I_kwDODunzps5B19LK
3,580
Bug in wiki bio load
{ "login": "tuhinjubcse", "id": 3104771, "node_id": "MDQ6VXNlcjMxMDQ3NzE=", "avatar_url": "https://avatars.githubusercontent.com/u/3104771?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tuhinjubcse", "html_url": "https://github.com/tuhinjubcse", "followers_url": "https://api.github.com/users/tuhinjubcse/followers", "following_url": "https://api.github.com/users/tuhinjubcse/following{/other_user}", "gists_url": "https://api.github.com/users/tuhinjubcse/gists{/gist_id}", "starred_url": "https://api.github.com/users/tuhinjubcse/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tuhinjubcse/subscriptions", "organizations_url": "https://api.github.com/users/tuhinjubcse/orgs", "repos_url": "https://api.github.com/users/tuhinjubcse/repos", "events_url": "https://api.github.com/users/tuhinjubcse/events{/privacy}", "received_events_url": "https://api.github.com/users/tuhinjubcse/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "+1, here's the error I got: \r\n\r\n```\r\n>>> from datasets import load_dataset\r\n>>>\r\n>>> load_dataset(\"wiki_bio\")\r\nDownloading: 7.58kB [00:00, 4.42MB/s]\r\nDownloading: 2.71kB [00:00, 1.30MB/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to /home/jxm3/.cache/huggingface/datasets/wiki_bio/default/1.1.0/5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/load.py\", line 1694, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/builder.py\", line 595, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/builder.py\", line 662, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/home/jxm3/.cache/huggingface/modules/datasets_modules/datasets/wiki_bio/5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9/wiki_bio.py\", line 125, in _split_generators\r\n data_dir = dl_manager.download_and_extract(my_urls)\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 308, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 196, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 251, in map_nested\r\n return function(data_struct)\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 217, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 298, in cached_path\r\n output_path = get_from_cache(\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 612, in get_from_cache\r\n raise FileNotFoundError(f\"Couldn't find file at {url}\")\r\nFileNotFoundError: Couldn't find file at https://drive.google.com/uc?export=download&id=1L7aoUXzHPzyzQ0ns4ApBbYepsjFOtXil\r\n>>>\r\n```\r\n", "@alejandrocros and @lhoestq - you added the wiki_bio dataset in #1173. It doesn't work anymore. Can you take a look at this?", "And if something is wrong with Google Drive, you could try to download (and collate and unzip) from here: https://github.com/DavidGrangier/wikipedia-biography-dataset", "Hi ! Thanks for reporting. I've downloaded the data and concatenated them into a zip file available here: https://huggingface.co/datasets/wiki_bio/tree/main/data\r\n\r\nI guess we can update the dataset script to use this zip file now :)" ]
1,642,241,073,000
1,643,618,289,000
1,643,618,289,000
NONE
null
wiki_bio is failing to load because of a failing Google Drive link. Can someone fix this? ![7E90023B-A3B1-4930-BA25-45CCCB4E1710](https://user-images.githubusercontent.com/3104771/149617870-5a32a2da-2c78-483b-bff6-d7534215a423.png) ![653C1C76-C725-4A04-A0D8-084373BA612F](https://user-images.githubusercontent.com/3104771/149617875-ef0e30b0-b76e-48cf-b3eb-93ba8e6e5465.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3580/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3580/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3579
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3579/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3579/comments
https://api.github.com/repos/huggingface/datasets/issues/3579/events
https://github.com/huggingface/datasets/pull/3579
1,103,451,118
PR_kwDODunzps4xBmY4
3,579
Add Text2log Dataset
{ "login": "apergo-ai", "id": 68908804, "node_id": "MDQ6VXNlcjY4OTA4ODA0", "avatar_url": "https://avatars.githubusercontent.com/u/68908804?v=4", "gravatar_id": "", "url": "https://api.github.com/users/apergo-ai", "html_url": "https://github.com/apergo-ai", "followers_url": "https://api.github.com/users/apergo-ai/followers", "following_url": "https://api.github.com/users/apergo-ai/following{/other_user}", "gists_url": "https://api.github.com/users/apergo-ai/gists{/gist_id}", "starred_url": "https://api.github.com/users/apergo-ai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apergo-ai/subscriptions", "organizations_url": "https://api.github.com/users/apergo-ai/orgs", "repos_url": "https://api.github.com/users/apergo-ai/repos", "events_url": "https://api.github.com/users/apergo-ai/events{/privacy}", "received_events_url": "https://api.github.com/users/apergo-ai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The CI fails are unrelated to your PR and fixed on master, I think we can merge now !" ]
1,642,157,101,000
1,642,698,584,000
1,642,698,584,000
CONTRIBUTOR
null
Adding the text2log dataset, used for training FOL sentence translation models
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3579/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3579/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3579", "html_url": "https://github.com/huggingface/datasets/pull/3579", "diff_url": "https://github.com/huggingface/datasets/pull/3579.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3579.patch", "merged_at": 1642698584000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3578
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3578/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3578/comments
https://api.github.com/repos/huggingface/datasets/issues/3578/events
https://github.com/huggingface/datasets/issues/3578
1,103,403,287
I_kwDODunzps5BxJkX
3,578
label information gets lost after parquet serialization
{ "login": "Tudyx", "id": 56633664, "node_id": "MDQ6VXNlcjU2NjMzNjY0", "avatar_url": "https://avatars.githubusercontent.com/u/56633664?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tudyx", "html_url": "https://github.com/Tudyx", "followers_url": "https://api.github.com/users/Tudyx/followers", "following_url": "https://api.github.com/users/Tudyx/following{/other_user}", "gists_url": "https://api.github.com/users/Tudyx/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tudyx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tudyx/subscriptions", "organizations_url": "https://api.github.com/users/Tudyx/orgs", "repos_url": "https://api.github.com/users/Tudyx/repos", "events_url": "https://api.github.com/users/Tudyx/events{/privacy}", "received_events_url": "https://api.github.com/users/Tudyx/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi ! We did a release of `datasets` today that may fix this issue. Can you try updating `datasets` and trying again ?\r\n\r\nEDIT: the issue is still there actually\r\n\r\nI think we can fix that by storing the Features in the parquet schema metadata, and then reload them when loading the parquet file" ]
1,642,155,038,000
1,643,095,301,000
null
NONE
null
## Describe the bug In the *dataset_info.json* file, information about the label gets lost after dataset serialization. ## Steps to reproduce the bug ```python from datasets import load_dataset # normal save dataset = load_dataset('glue', 'sst2', split='train') dataset.save_to_disk("normal_save") # save after parquet serialization dataset.to_parquet("glue-sst2-train.parquet") dataset = load_dataset("parquet", data_files='glue-sst2-train.parquet') dataset.save_to_disk("save_after_parquet") ``` ## Expected results I expected to keep the label information in the *dataset_info.json* file even after parquet serialization. ## Actual results In the normal serialization I got ```json "label": { "num_classes": 2, "names": [ "negative", "positive" ], "names_file": null, "id": null, "_type": "ClassLabel" }, ``` And after parquet serialization I got ```json "label": { "dtype": "int64", "id": null, "_type": "Value" }, ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.0 - Platform: ubuntu 20.04 - Python version: 3.8.10 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3578/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3578/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3577
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3577/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3577/comments
https://api.github.com/repos/huggingface/datasets/issues/3577/events
https://github.com/huggingface/datasets/issues/3577
1,102,598,241
I_kwDODunzps5BuFBh
3,577
Add The Mexican Emotional Speech Database (MESD)
{ "login": "omarespejel", "id": 4755430, "node_id": "MDQ6VXNlcjQ3NTU0MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4", "gravatar_id": "", "url": "https://api.github.com/users/omarespejel", "html_url": "https://github.com/omarespejel", "followers_url": "https://api.github.com/users/omarespejel/followers", "following_url": "https://api.github.com/users/omarespejel/following{/other_user}", "gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}", "starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions", "organizations_url": "https://api.github.com/users/omarespejel/orgs", "repos_url": "https://api.github.com/users/omarespejel/repos", "events_url": "https://api.github.com/users/omarespejel/events{/privacy}", "received_events_url": "https://api.github.com/users/omarespejel/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech", "name": "speech", "color": "d93f0b", "default": false, "description": "" } ]
open
false
null
[]
null
[]
1,642,117,776,000
1,643,292,878,000
null
NONE
null
## Adding a Dataset
- **Name:** *The Mexican Emotional Speech Database (MESD)*
- **Description:** *Contains 864 voice recordings with six different prosodies: anger, disgust, fear, happiness, neutral, and sadness. Furthermore, three voice categories are included: female adult, male adult, and child.*
- **Paper:** *[Paper](https://ieeexplore.ieee.org/abstract/document/9629934/authors#authors)*
- **Data:** *[link to the Github repository or current dataset location](https://data.mendeley.com/datasets/cy34mh68j9/3)*
- **Motivation:** *Would add Spanish speech data to the HF datasets :)*

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3577/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3577/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3576
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3576/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3576/comments
https://api.github.com/repos/huggingface/datasets/issues/3576/events
https://github.com/huggingface/datasets/pull/3576
1,102,059,651
PR_kwDODunzps4w8sUm
3,576
Add PASS dataset
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,642,094,167,000
1,642,697,448,000
1,642,697,447,000
CONTRIBUTOR
null
This PR adds the PASS dataset. Closes #3043
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3576/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3576/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3576", "html_url": "https://github.com/huggingface/datasets/pull/3576", "diff_url": "https://github.com/huggingface/datasets/pull/3576.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3576.patch", "merged_at": 1642697447000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3575
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3575/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3575/comments
https://api.github.com/repos/huggingface/datasets/issues/3575/events
https://github.com/huggingface/datasets/pull/3575
1,101,947,955
PR_kwDODunzps4w8Usm
3,575
Add Arrow type casting to struct for Image and Audio + Support nested casting
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Regarding the tests I'm just missing the FixedSizeListType type casting for ListArray objects, will to it tomorrow as well as adding new tests + docstrings\r\n\r\nand also adding soundfile in the CI", "While writing some tests I noticed that the ExtensionArray can't be directly concatenated - maybe we can get rid of the extension types/arrays and only keep their storages in native arrow types.\r\n\r\nIn this case the `cast_storage` functions should be the responsibility of the Image and Audio classes directly. And therefore we would need to never cast to a pyarrow type again but to a HF feature - since they'd end up being the one able to tell what's castable or not. This is fine in my opinion but let me know what you think. I can take care of this on monday I think", "Alright I got rid of all the extension type stuff, I'm writing the new tests now :)", "Tests are done, I'll finish the comments and docstrings tomorrow and set the PR on ready for review once it's done !", "> While writing some tests I noticed that the ExtensionArray can't be directly concatenated - maybe we can get rid of the extension types/arrays and only keep their storages in native arrow types.\r\n>\r\n>In this case the cast_storage functions should be the responsibility of the Image and Audio classes directly. And therefore we would need two never cast to a pyarrow type again but to a HF feature - since they'd end up being the one able to tell what's castable or not. This is fine in my opinion but let me know what you think. I can take care of this on monday I think\r\n\r\nDoes this change affect performance?", "> Does this change affect performance?\r\n\r\nIn general it shouldn't have a significant impact on performance since the structure of the features is rarely complex (in general we have <20 features and <4 levels of nesting)\r\n\r\nRegarding Audio and Image specifically, casting from a StringArray is a little bit more costly since it creates the \"bytes\" BinaryArray with `None` values with the same length as the \"path\" array. From the tests I did locally this is very fast though and shouldn't affect the user experience at the current scale of the audio/image datasets we have. It also requires a little bit of RAM though\r\n", "Alright this is ready for review now ! Let me know if you have comments and/or improvements :)" ]
1,642,088,219,000
1,642,771,348,000
1,642,771,347,000
MEMBER
null
## Intro

1. Currently, it's not possible to have nested features containing Audio or Image.
2. Moreover, one can keep an Arrow array as a StringArray to store paths to images, but such arrays can't be directly concatenated to another image array if it's stored as another Arrow type (typically, a StructType).
3. Allowing several Arrow types for a single HF feature type also leads to bugs like this one #3497
4. Issues like #3247 are quite frequent and happen when Arrow fails to reorder StructArrays.
5. Casting the Audio feature type is blocking preparation for the ASR task template: https://github.com/huggingface/datasets/pull/3364

All those issues are linked together by the fact that:
- we are limited by the Arrow type casting, which is lacking features for nested types.
- and especially for Audio and Image: they are not robust enough for concatenation and feature inference.

## Proposed solution

To fix 1 and 4 I implemented nested array type casting (which is missing in PyArrow).

To fix 2, 3 and 5 while having a simple implementation for nested array type casting, I changed the storage type of Audio and Image to always be a StructType. Also, casting from StringType is directly implemented via a new function `cast_storage` that is defined individually for Audio and Image. I also added nested decoding.

## Implementation details

### I. Better Arrow data type casting for nested data structures

I implemented new functions `array_cast` and `table_cast` that do the exact same as `pyarrow.Array.cast` or `pyarrow.Table.cast` but support nested struct casting and array re-ordering. These functions can be used on PyArrow objects, and are already integrated in our own `datasets.table.Table.cast` functions. So one can do `my_dataset.data.cast(pyarrow_schema_with_custom_hf_types)` directly.

### II. New image and audio extension types with custom casting

I used PyArrow extension types to be able to define what casting is allowed or not. For example, both StringType->ImageExtensionType and StructType->ImageExtensionType are allowed, via the `cast_storage` method.

I factorized all the PyArrow + Pandas extension stuff in the `base_extension.py` file. This aims at separating the front-facing API code of `datasets` from the Arrow back-end, which requires advanced knowledge.

### III. Nested feature decoding

I added a new function `decode_nested_example` to decode image and audio data in nested data structures. For optimization's sake, this function is only called if a column has at least one feature that requires decoding.

## Alternative considered

The casting to struct type could have been done directly with python objects using some Audio and Image methods, but bringing arrow data to python objects is expensive. The Audio and Image types could also have been able to convert the arrow data directly, but this is not convenient to use when casting a full Arrow Table with nested fields. Therefore I decided to keep the Arrow data casting logic in Arrow extension types.

## Future work

This work can be used to allow the ArrayND feature types to be nested too (see issue #887)

## TODO
- [x] fix current tests
- [x] add new tests
- [x] docstrings/comments
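Editor's note (a sketch of the kind of usage this enables, not taken from the PR itself; the file names are hypothetical): with nested casting, a column of image paths nested inside a `Sequence` can be cast to the `Image` feature type in one call.
```python
from datasets import Dataset, Features, Sequence, Image

# Hypothetical paths; the cast itself only rewrites the Arrow storage,
# decoding to PIL images happens lazily on access.
ds = Dataset.from_dict({"images": [["img1.png", "img2.png"]]})
ds = ds.cast(Features({"images": Sequence(Image())}))
```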
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3575/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3575/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3575", "html_url": "https://github.com/huggingface/datasets/pull/3575", "diff_url": "https://github.com/huggingface/datasets/pull/3575.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3575.patch", "merged_at": 1642771347000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3574
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3574/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3574/comments
https://api.github.com/repos/huggingface/datasets/issues/3574/events
https://github.com/huggingface/datasets/pull/3574
1,101,781,401
PR_kwDODunzps4w7vu6
3,574
Fix qa4mre tags
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,642,082,219,000
1,642,082,582,000
1,642,082,581,000
MEMBER
null
The YAML tags were invalid. I also fixed the dataset mirroring logging that failed because of this issue [here](https://github.com/huggingface/datasets/actions/runs/1690109581)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3574/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3574/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3574", "html_url": "https://github.com/huggingface/datasets/pull/3574", "diff_url": "https://github.com/huggingface/datasets/pull/3574.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3574.patch", "merged_at": 1642082581000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3573
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3573/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3573/comments
https://api.github.com/repos/huggingface/datasets/issues/3573/events
https://github.com/huggingface/datasets/pull/3573
1,101,157,676
PR_kwDODunzps4w5oE_
3,573
Add Mauve metric
{ "login": "jthickstun", "id": 2321244, "node_id": "MDQ6VXNlcjIzMjEyNDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2321244?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jthickstun", "html_url": "https://github.com/jthickstun", "followers_url": "https://api.github.com/users/jthickstun/followers", "following_url": "https://api.github.com/users/jthickstun/following{/other_user}", "gists_url": "https://api.github.com/users/jthickstun/gists{/gist_id}", "starred_url": "https://api.github.com/users/jthickstun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jthickstun/subscriptions", "organizations_url": "https://api.github.com/users/jthickstun/orgs", "repos_url": "https://api.github.com/users/jthickstun/repos", "events_url": "https://api.github.com/users/jthickstun/events{/privacy}", "received_events_url": "https://api.github.com/users/jthickstun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! The CI was failing because `mauve-text` wasn't installed. I added it to the CI setup :)\r\n\r\nI also did some minor changes to the script itself, especially to remove `**kwargs` and explicitly mentioned all the supported arguments (this way if someone does a typo with some parameters they get an error)" ]
1,642,045,968,000
1,642,690,808,000
1,642,690,808,000
CONTRIBUTOR
null
Add support for the [Mauve](https://github.com/krishnap25/mauve) metric introduced in this [paper](https://arxiv.org/pdf/2102.01454.pdf) (Neurips, 2021).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3573/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3573/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3573", "html_url": "https://github.com/huggingface/datasets/pull/3573", "diff_url": "https://github.com/huggingface/datasets/pull/3573.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3573.patch", "merged_at": 1642690807000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3572
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3572/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3572/comments
https://api.github.com/repos/huggingface/datasets/issues/3572/events
https://github.com/huggingface/datasets/issues/3572
1,100,634,244
I_kwDODunzps5BmliE
3,572
ConnectionError in IndicGLUE dataset
{ "login": "sahoodib", "id": 79107194, "node_id": "MDQ6VXNlcjc5MTA3MTk0", "avatar_url": "https://avatars.githubusercontent.com/u/79107194?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sahoodib", "html_url": "https://github.com/sahoodib", "followers_url": "https://api.github.com/users/sahoodib/followers", "following_url": "https://api.github.com/users/sahoodib/following{/other_user}", "gists_url": "https://api.github.com/users/sahoodib/gists{/gist_id}", "starred_url": "https://api.github.com/users/sahoodib/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sahoodib/subscriptions", "organizations_url": "https://api.github.com/users/sahoodib/orgs", "repos_url": "https://api.github.com/users/sahoodib/repos", "events_url": "https://api.github.com/users/sahoodib/events{/privacy}", "received_events_url": "https://api.github.com/users/sahoodib/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "@sahoodib, thanks for reporting.\r\n\r\nIndeed, none of the data links appearing in the IndicGLUE website are working, e.g.: https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/evaluations/soham-articles.tar.gz\r\n```\r\n<Error>\r\n<Code>UserProjectAccountProblem</Code>\r\n<Message>User project billing account not in good standing.</Message>\r\n<Details>\r\nThe billing account for the owning project is disabled in state delinquent\r\n</Details>\r\n</Error>\r\n```\r\n\r\nWe have contacted the data owners to inform them about their issue and ask them if they plan to fix it." ]
1,642,010,376,000
1,644,830,226,000
null
NONE
null
While I am trying to load the IndicGLUE dataset (https://huggingface.co/datasets/indic_glue), it gives me the error:
```
ConnectionError: Couldn't reach https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/evaluations/wikiann-ner.tar.gz (error 403)
```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3572/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3572/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3571
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3571/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3571/comments
https://api.github.com/repos/huggingface/datasets/issues/3571/events
https://github.com/huggingface/datasets/pull/3571
1,100,519,604
PR_kwDODunzps4w3fVQ
3,571
Add missing tasks to MuchoCine dataset
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,642,003,652,000
1,642,697,468,000
1,642,697,467,000
CONTRIBUTOR
null
Addresses the 2nd bullet point in #2520. I'm also removing the licensing information, because I couldn't verify that it is correct.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3571/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3571/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3571", "html_url": "https://github.com/huggingface/datasets/pull/3571", "diff_url": "https://github.com/huggingface/datasets/pull/3571.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3571.patch", "merged_at": 1642697467000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3570
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3570/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3570/comments
https://api.github.com/repos/huggingface/datasets/issues/3570/events
https://github.com/huggingface/datasets/pull/3570
1,100,480,791
PR_kwDODunzps4w3Xez
3,570
Add the KMWP dataset (extension of #3564)
{ "login": "sooftware", "id": 42150335, "node_id": "MDQ6VXNlcjQyMTUwMzM1", "avatar_url": "https://avatars.githubusercontent.com/u/42150335?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sooftware", "html_url": "https://github.com/sooftware", "followers_url": "https://api.github.com/users/sooftware/followers", "following_url": "https://api.github.com/users/sooftware/following{/other_user}", "gists_url": "https://api.github.com/users/sooftware/gists{/gist_id}", "starred_url": "https://api.github.com/users/sooftware/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sooftware/subscriptions", "organizations_url": "https://api.github.com/users/sooftware/orgs", "repos_url": "https://api.github.com/users/sooftware/repos", "events_url": "https://api.github.com/users/sooftware/events{/privacy}", "received_events_url": "https://api.github.com/users/sooftware/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Sorry, I'm late to check! I'll send it to you soon!" ]
1,642,001,588,000
1,643,163,408,000
null
NONE
null
New pull request of #3564 (Add the KMWP dataset)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3570/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3570/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3570", "html_url": "https://github.com/huggingface/datasets/pull/3570", "diff_url": "https://github.com/huggingface/datasets/pull/3570.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3570.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3569
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3569/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3569/comments
https://api.github.com/repos/huggingface/datasets/issues/3569/events
https://github.com/huggingface/datasets/pull/3569
1,100,478,994
PR_kwDODunzps4w3XGo
3,569
Add the DKTC dataset (Extension of #3564)
{ "login": "sooftware", "id": 42150335, "node_id": "MDQ6VXNlcjQyMTUwMzM1", "avatar_url": "https://avatars.githubusercontent.com/u/42150335?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sooftware", "html_url": "https://github.com/sooftware", "followers_url": "https://api.github.com/users/sooftware/followers", "following_url": "https://api.github.com/users/sooftware/following{/other_user}", "gists_url": "https://api.github.com/users/sooftware/gists{/gist_id}", "starred_url": "https://api.github.com/users/sooftware/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sooftware/subscriptions", "organizations_url": "https://api.github.com/users/sooftware/orgs", "repos_url": "https://api.github.com/users/sooftware/repos", "events_url": "https://api.github.com/users/sooftware/events{/privacy}", "received_events_url": "https://api.github.com/users/sooftware/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "I reflect your comment! @lhoestq ", "Wait, the format of the data just changed, so I'll take it into consideration and commit it.", "I update the code according to the dataset structure change.", "Thanks ! I think the dummy data are not valid yet - the dummy train.csv file only contains a partial example (the quotes `\"` start but never end).", "> Thanks ! I think the dummy data are not valid yet - the dummy train.csv file only contains a partial example (the quotes `\"` start but never end).\r\n\r\nHi! @lhoestq There is a problem. \r\n<img src=\"https://user-images.githubusercontent.com/42150335/149804142-3800e635-f5a0-44d9-9694-0c2b0c05f16b.png\" width=500>\r\n \r\nAs shown in the picture above, the conversation is divided into \"\\n\" in the \"conversion\" column. \r\nThat's why there's an error in the file path that only saved only five lines like below. \r\n\r\n```\r\n'idx', 'class', 'conversation'\r\n'0', '협박 대화', '\"지금 너 스스로를 죽여달라고 애원하는 것인가?'\r\n아닙니다. 죄송합니다.'\r\n죽을 거면 혼자 죽지 우리까지 사건에 휘말리게 해? 진짜 죽여버리고 싶게.'\r\n정말 잘못했습니다.\r\n```\r\n \r\nIn fact, these five lines are all one line. \r\n \r\n\r\n", "Hi ! I see, in this case ca you make sure that the dummy data has a full sample ?\r\n\r\nFeel free to open the dummy train.csv in the dummy_data.zip file and add the missing lines", "Sorry, I'm late to check! I'll send it to you soon!" ]
1,642,001,489,000
1,643,163,381,000
null
NONE
null
New pull request of #3564. (for DKTC)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3569/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3569/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3569", "html_url": "https://github.com/huggingface/datasets/pull/3569", "diff_url": "https://github.com/huggingface/datasets/pull/3569.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3569.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3568
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3568/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3568/comments
https://api.github.com/repos/huggingface/datasets/issues/3568/events
https://github.com/huggingface/datasets/issues/3568
1,100,380,631
I_kwDODunzps5BlnnX
3,568
Downloading Hugging Face Medical Dialog Dataset NonMatchingSplitsSizesError
{ "login": "fabianslife", "id": 49265757, "node_id": "MDQ6VXNlcjQ5MjY1NzU3", "avatar_url": "https://avatars.githubusercontent.com/u/49265757?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fabianslife", "html_url": "https://github.com/fabianslife", "followers_url": "https://api.github.com/users/fabianslife/followers", "following_url": "https://api.github.com/users/fabianslife/following{/other_user}", "gists_url": "https://api.github.com/users/fabianslife/gists{/gist_id}", "starred_url": "https://api.github.com/users/fabianslife/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fabianslife/subscriptions", "organizations_url": "https://api.github.com/users/fabianslife/orgs", "repos_url": "https://api.github.com/users/fabianslife/repos", "events_url": "https://api.github.com/users/fabianslife/events{/privacy}", "received_events_url": "https://api.github.com/users/fabianslife/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @fabianslife, thanks for reporting.\r\n\r\nI think you were using an old version of `datasets` because this bug was already fixed in version `1.13.0` (13 Oct 2021):\r\n- Fix: 55fd140a63b8f03a0e72985647e498f1fc799d3f\r\n- PR: #3046\r\n- Issue: #2969 \r\n\r\nPlease, feel free to update the library: `pip install -U datasets`." ]
1,641,996,224,000
1,644,831,154,000
1,644,831,154,000
NONE
null
I wanted to download the Nedical Dialog Dataset from huggingface, using this github link: https://github.com/huggingface/datasets/tree/master/datasets/medical_dialog After downloading the raw datasets from google drive, i unpacked everything and put it in the same folder as the medical_dialog.py which is: ``` import copy import os import re import datasets _CITATION = """\ @article{chen2020meddiag, title={MedDialog: a large-scale medical dialogue dataset}, author={Chen, Shu and Ju, Zeqian and Dong, Xiangyu and Fang, Hongchao and Wang, Sicheng and Yang, Yue and Zeng, Jiaqi and Zhang, Ruisi and Zhang, Ruoyu and Zhou, Meng and Zhu, Penghui and Xie, Pengtao}, journal={arXiv preprint arXiv:2004.03329}, year={2020} } """ _DESCRIPTION = """\ The MedDialog dataset (English) contains conversations (in English) between doctors and patients.\ It has 0.26 million dialogues. The data is continuously growing and more dialogues will be added. \ The raw dialogues are from healthcaremagic.com and icliniq.com.\ All copyrights of the data belong to healthcaremagic.com and icliniq.com. """ _HOMEPAGE = "https://github.com/UCSD-AI4H/Medical-Dialogue-System" _LICENSE = "" class MedicalDialog(datasets.GeneratorBasedBuilder): VERSION = datasets.Version("1.0.0") BUILDER_CONFIGS = [ datasets.BuilderConfig(name="en", description="The dataset of medical dialogs in English.", version=VERSION), datasets.BuilderConfig(name="zh", description="The dataset of medical dialogs in Chinese.", version=VERSION), ] @property def manual_download_instructions(self): return """\ \n For English:\nYou need to go to https://drive.google.com/drive/folders/1g29ssimdZ6JzTST6Y8g6h-ogUNReBtJD?usp=sharing,\ and manually download the dataset from Google Drive. Once it is completed, a file named Medical-Dialogue-Dataset-English-<timestamp-info>.zip will appear in your Downloads folder( or whichever folder your browser chooses to save files to). Unzip the folder to obtain a folder named "Medical-Dialogue-Dataset-English" several text files. Now, you can specify the path to this folder for the data_dir argument in the datasets.load_dataset(...) option. The <path/to/folder> can e.g. be "/Downloads/Medical-Dialogue-Dataset-English". The data can then be loaded using the below command:\ datasets.load_dataset("medical_dialog", name="en", data_dir="/Downloads/Medical-Dialogue-Dataset-English")`. \n For Chinese:\nFollow the above process. Change the 'name' to 'zh'.The download link is https://drive.google.com/drive/folders/1r09_i8nJ9c1nliXVGXwSqRYqklcHd9e2 **NOTE** - A caution while downloading from drive. It is better to download single files since creating a zip might not include files <500 MB. This has been observed mutiple times. - After downloading the files and adding them to the appropriate folder, the path of the folder can be given as input tu the data_dir path. 
""" datasets.load_dataset("medical_dialog", name="en", data_dir="Medical-Dialogue-Dataset-English") def _info(self): if self.config.name == "zh": features = datasets.Features( { "file_name": datasets.Value("string"), "dialogue_id": datasets.Value("int32"), "dialogue_url": datasets.Value("string"), "dialogue_turns": datasets.Sequence( { "speaker": datasets.ClassLabel(names=["病人", "医生"]), "utterance": datasets.Value("string"), } ), } ) if self.config.name == "en": features = datasets.Features( { "file_name": datasets.Value("string"), "dialogue_id": datasets.Value("int32"), "dialogue_url": datasets.Value("string"), "dialogue_turns": datasets.Sequence( { "speaker": datasets.ClassLabel(names=["Patient", "Doctor"]), "utterance": datasets.Value("string"), } ), } ) return datasets.DatasetInfo( # This is the description that will appear on the datasets page. description=_DESCRIPTION, features=features, supervised_keys=None, # Homepage of the dataset for documentation homepage=_HOMEPAGE, # License for the dataset if available license=_LICENSE, # Citation for the dataset citation=_CITATION, ) def _split_generators(self, dl_manager): """Returns SplitGenerators.""" path_to_manual_file = os.path.abspath(os.path.expanduser(dl_manager.manual_dir)) if not os.path.exists(path_to_manual_file): raise FileNotFoundError( f"{path_to_manual_file} does not exist. Make sure you insert a manual dir via `datasets.load_dataset('medical_dialog', data_dir=...)`. Manual download instructions: {self.manual_download_instructions})" ) filepaths = [ os.path.join(path_to_manual_file, txt_file_name) for txt_file_name in sorted(os.listdir(path_to_manual_file)) if txt_file_name.endswith("txt") ] return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepaths": filepaths})] def _generate_examples(self, filepaths): """Yields examples. Iterates over each file and give the creates the corresponding features. NOTE: - The code makes some assumption on the structure of the raw .txt file. - There are some checks to separate different id's. Hopefully, should not cause further issues later when more txt files are added. """ data_lang = self.config.name id_ = -1 for filepath in filepaths: with open(filepath, encoding="utf-8") as f_in: # Parameters to just "sectionize" the raw data last_part = "" last_dialog = {} last_list = [] last_user = "" check_list = [] # These flags are present to have a single function address both chinese and english data # English data is a little hahazard (i.e. the sentences spans multiple different lines), # Chinese is compact with one line for doctor and patient. conv_flag = False des_flag = False while True: line = f_in.readline() if not line: break # Extracting the dialog id if line[:2] == "id": # Hardcode alert! # Handling ID references that may come in the description # These were observed in the Chinese dataset and were not # followed by numbers try: dialogue_id = int(re.findall(r"\d+", line)[0]) except IndexError: continue # Extracting the url if line[:4] == "http": # Hardcode alert! dialogue_url = line.rstrip() # Extracting the patient info from description. if line[:11] == "Description": # Hardcode alert! last_part = "description" last_dialog = {} last_list = [] last_user = "" last_conv = {"speaker": "", "utterance": ""} while True: line = f_in.readline() if (not line) or (line in ["\n", "\n\r"]): break else: if data_lang == "zh": # Condition in chinese if line[:5] == "病情描述:": # Hardcode alert! 
last_user = "病人" sen = f_in.readline().rstrip() des_flag = True if data_lang == "en": last_user = "Patient" sen = line.rstrip() des_flag = True if des_flag: if sen == "": continue if sen in check_list: last_conv["speaker"] = "" last_conv["utterance"] = "" else: last_conv["speaker"] = last_user last_conv["utterance"] = sen check_list.append(sen) des_flag = False break # Extracting the conversation info from dialogue. elif line[:8] == "Dialogue": # Hardcode alert! if last_part == "description" and len(last_conv["utterance"]) > 0: last_part = "dialogue" if data_lang == "zh": last_user = "病人" if data_lang == "en": last_user = "Patient" while True: line = f_in.readline() if (not line) or (line in ["\n", "\n\r"]): conv_flag = False last_user = "" last_list.append(copy.deepcopy(last_conv)) # To ensure close of conversation, only even number of sentences # are extracted last_turn = len(last_list) if int(last_turn / 2) > 0: temp = int(last_turn / 2) id_ += 1 last_dialog["file_name"] = filepath last_dialog["dialogue_id"] = dialogue_id last_dialog["dialogue_url"] = dialogue_url last_dialog["dialogue_turns"] = last_list[: temp * 2] yield id_, last_dialog break if data_lang == "zh": if line[:3] == "病人:" or line[:3] == "医生:": # Hardcode alert! user = line[:2] # Hardcode alert! line = f_in.readline() conv_flag = True # The elif block is to ensure that multi-line sentences are captured. # This has been observed only in english. if data_lang == "en": if line.strip() == "Patient:" or line.strip() == "Doctor:": # Hardcode alert! user = line.replace(":", "").rstrip() line = f_in.readline() conv_flag = True elif line[:2] != "id": # Hardcode alert! conv_flag = True # Continues till the next ID is parsed if conv_flag: sen = line.rstrip() if sen == "": continue if user == last_user: last_conv["utterance"] = last_conv["utterance"] + sen else: last_user = user last_list.append(copy.deepcopy(last_conv)) last_conv["utterance"] = sen last_conv["speaker"] = user ``` running this code gives me the error: ``` File "C:\Users\Fabia\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\utils\info_utils.py", line 74, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='medical_dialog'), 'recorded': SplitInfo(name='train', num_bytes=292801173, num_examples=229674, dataset_name='medical_dialog')}] ```
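Editor's note (not part of the original report): the proper fix here is updating `datasets`, as noted in the maintainer's reply below. As a stopgap on versions that support it, the split-size verification that raises `NonMatchingSplitsSizesError` can be skipped; a minimal sketch, assuming the `ignore_verifications` flag is available in the installed version:
```python
from datasets import load_dataset

# Skip checksum/split-size verification so the mismatch between the
# recorded and expected split sizes does not abort loading.
dataset = load_dataset(
    "medical_dialog",
    name="en",
    data_dir="Medical-Dialogue-Dataset-English",
    ignore_verifications=True,
)
```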
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3568/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3568/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3567
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3567/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3567/comments
https://api.github.com/repos/huggingface/datasets/issues/3567/events
https://github.com/huggingface/datasets/pull/3567
1,100,296,696
PR_kwDODunzps4w2xDl
3,567
Fix push to hub to allow individual split push
{ "login": "thomasw21", "id": 24695242, "node_id": "MDQ6VXNlcjI0Njk1MjQy", "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomasw21", "html_url": "https://github.com/thomasw21", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "repos_url": "https://api.github.com/users/thomasw21/repos", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,641,991,378,000
1,641,994,141,000
null
MEMBER
null
# Description of the issue
If one pushes a single split to a datasets repo, the dataset is uploaded and the config is overwritten. However, previous config splits end up being lost, despite the underlying data still being present.

The new flow is the following:
- query the old config from the repo
- update it into a new config (e.g., add/overwrite a new split)
- push the new config

# Side fix
- `repo_id` in HfFileSystem was wrongly typed.
- I've added `indent=2` as it becomes much easier to read now.
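Editor's note (an illustrative sketch of the flow this fix targets, not from the PR; the repo name is hypothetical and the `split` argument is assumed to be supported by the installed `datasets` version): pushing one split should no longer wipe the other splits' entries from the config stored on the Hub.
```python
from datasets import load_dataset

# Push only the train split; entries for other splits already on the
# Hub should be preserved in the repo's config after this call.
train = load_dataset("glue", "sst2", split="train")
train.push_to_hub("my-username/my-dataset", split="train")
```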
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3567/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3567/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3567", "html_url": "https://github.com/huggingface/datasets/pull/3567", "diff_url": "https://github.com/huggingface/datasets/pull/3567.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3567.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3566
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3566/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3566/comments
https://api.github.com/repos/huggingface/datasets/issues/3566/events
https://github.com/huggingface/datasets/pull/3566
1,100,155,902
PR_kwDODunzps4w2Tcc
3,566
Add initial electricity time series dataset
{ "login": "kashif", "id": 8100, "node_id": "MDQ6VXNlcjgxMDA=", "avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kashif", "html_url": "https://github.com/kashif", "followers_url": "https://api.github.com/users/kashif/followers", "following_url": "https://api.github.com/users/kashif/following{/other_user}", "gists_url": "https://api.github.com/users/kashif/gists{/gist_id}", "starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kashif/subscriptions", "organizations_url": "https://api.github.com/users/kashif/orgs", "repos_url": "https://api.github.com/users/kashif/repos", "events_url": "https://api.github.com/users/kashif/events{/privacy}", "received_events_url": "https://api.github.com/users/kashif/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@kashif Some commits on the PR branch are not authored by you, so could you please open a new PR and not use rebase this time :)? You can copy and paste the dataset dir to the new branch. \r\n\r\n", "making a new PR" ]
1,641,982,892,000
1,644,931,908,000
1,644,931,908,000
CONTRIBUTOR
null
Here is an initial prototype time series dataset
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3566/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3566/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3566", "html_url": "https://github.com/huggingface/datasets/pull/3566", "diff_url": "https://github.com/huggingface/datasets/pull/3566.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3566.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3565
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3565/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3565/comments
https://api.github.com/repos/huggingface/datasets/issues/3565/events
https://github.com/huggingface/datasets/pull/3565
1,099,296,693
PR_kwDODunzps4wzjhH
3,565
Add parameter `preserve_index` to `from_pandas`
{ "login": "Sorrow321", "id": 20703486, "node_id": "MDQ6VXNlcjIwNzAzNDg2", "avatar_url": "https://avatars.githubusercontent.com/u/20703486?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sorrow321", "html_url": "https://github.com/Sorrow321", "followers_url": "https://api.github.com/users/Sorrow321/followers", "following_url": "https://api.github.com/users/Sorrow321/following{/other_user}", "gists_url": "https://api.github.com/users/Sorrow321/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sorrow321/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sorrow321/subscriptions", "organizations_url": "https://api.github.com/users/Sorrow321/orgs", "repos_url": "https://api.github.com/users/Sorrow321/repos", "events_url": "https://api.github.com/users/Sorrow321/events{/privacy}", "received_events_url": "https://api.github.com/users/Sorrow321/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "> \r\n\r\nI did `make style` and it affected over 500 files\r\n\r\n```\r\nAll done! ✨ 🍰 ✨\r\n575 files reformatted, 372 files left unchanged.\r\nisort tests src benchmarks datasets/**/*.py metri\r\n```\r\n\r\n(result)\r\n![image](https://user-images.githubusercontent.com/20703486/149166681-2f9d1bc4-116a-4f53-ad42-e54e3b8bd605.png)\r\n", "Nvm I was using wrong black version" ]
1,641,914,797,000
1,642,003,887,000
1,642,003,887,000
CONTRIBUTOR
null
Added an optional parameter so that the user can get rid of useless index preserving. [Issue](https://github.com/huggingface/datasets/issues/3563)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3565/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3565/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3565", "html_url": "https://github.com/huggingface/datasets/pull/3565", "diff_url": "https://github.com/huggingface/datasets/pull/3565.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3565.patch", "merged_at": 1642003886000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3564
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3564/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3564/comments
https://api.github.com/repos/huggingface/datasets/issues/3564/events
https://github.com/huggingface/datasets/pull/3564
1,099,214,403
PR_kwDODunzps4wzSOL
3,564
Add the KMWP & DKTC dataset.
{ "login": "sooftware", "id": 42150335, "node_id": "MDQ6VXNlcjQyMTUwMzM1", "avatar_url": "https://avatars.githubusercontent.com/u/42150335?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sooftware", "html_url": "https://github.com/sooftware", "followers_url": "https://api.github.com/users/sooftware/followers", "following_url": "https://api.github.com/users/sooftware/following{/other_user}", "gists_url": "https://api.github.com/users/sooftware/gists{/gist_id}", "starred_url": "https://api.github.com/users/sooftware/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sooftware/subscriptions", "organizations_url": "https://api.github.com/users/sooftware/orgs", "repos_url": "https://api.github.com/users/sooftware/repos", "events_url": "https://api.github.com/users/sooftware/events{/privacy}", "received_events_url": "https://api.github.com/users/sooftware/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I reflect your review. cc. @lhoestq ", "Ah sorry, I missed KMWP comment, wait.", "I request 2 new pull requests. #3569 #3570" ]
1,641,910,448,000
1,642,001,629,000
1,642,001,608,000
NONE
null
Add the DKTC dataset. - https://github.com/tunib-ai/DKTC
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3564/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3564/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3564", "html_url": "https://github.com/huggingface/datasets/pull/3564", "diff_url": "https://github.com/huggingface/datasets/pull/3564.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3564.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3563
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3563/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3563/comments
https://api.github.com/repos/huggingface/datasets/issues/3563/events
https://github.com/huggingface/datasets/issues/3563
1,099,070,368
I_kwDODunzps5Bgnug
3,563
Dataset.from_pandas preserves useless index
{ "login": "Sorrow321", "id": 20703486, "node_id": "MDQ6VXNlcjIwNzAzNDg2", "avatar_url": "https://avatars.githubusercontent.com/u/20703486?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sorrow321", "html_url": "https://github.com/Sorrow321", "followers_url": "https://api.github.com/users/Sorrow321/followers", "following_url": "https://api.github.com/users/Sorrow321/following{/other_user}", "gists_url": "https://api.github.com/users/Sorrow321/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sorrow321/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sorrow321/subscriptions", "organizations_url": "https://api.github.com/users/Sorrow321/orgs", "repos_url": "https://api.github.com/users/Sorrow321/repos", "events_url": "https://api.github.com/users/Sorrow321/events{/privacy}", "received_events_url": "https://api.github.com/users/Sorrow321/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi! That makes sense. Sure, feel free to open a PR! Just a small suggestion: let's make `preserve_index` a parameter of `Dataset.from_pandas` (which we then pass to `InMemoryTable.from_pandas`) with `None` as a default value to not have this as a breaking change. " ]
1,641,902,827,000
1,642,003,887,000
1,642,003,887,000
CONTRIBUTOR
null
## Describe the bug
Let's say that you want to create a Dataset object from a pandas dataframe. Most likely you will write something like this:
```
import pandas as pd
from datasets import Dataset

df = pd.read_csv('some_dataset.csv')

# Some DataFrame preprocessing code...
dataset = Dataset.from_pandas(df)
```
If your preprocessing code contains indexing operations like this:
```
df = df[df.col1 == some_value]
```
then your df.index can be changed from the (default) `RangeIndex(start=0, stop=16590, step=1)` to something like this:
```
Int64Index([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ... 83979, 83980, 83981, 83982, 83983, 83984, 83985, 83986, 83987, 83988], dtype='int64', length=16590)
```
In this case, PyArrow (by default) will preserve this non-standard index. As a result, your dataset object will have an extra field that you likely don't want to have: `'__index_level_0__'`.

You can easily fix this by just adding the extra argument `preserve_index=False` to the call of `InMemoryTable.from_pandas` in `arrow_dataset.py`. If you approve that this isn't desirable behavior, I can make a PR fixing that.

## Environment info
- `datasets` version: 1.16.1
- Platform: Linux-5.11.0-44-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 6.0.1
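Editor's note (not part of the original report): a minimal sketch of the fix from the user's side, mirroring the snippet above (`some_value` is a placeholder from the report). The `preserve_index` parameter is the one added by the linked PR; on versions without it, resetting the index beforehand is an equivalent workaround.
```python
import pandas as pd
from datasets import Dataset

df = pd.read_csv("some_dataset.csv")
df = df[df.col1 == some_value]  # leaves a non-default Int64Index behind

# With the parameter added in the linked PR: drop the index on conversion.
dataset = Dataset.from_pandas(df, preserve_index=False)

# Equivalent workaround on older versions: reset the index beforehand.
dataset = Dataset.from_pandas(df.reset_index(drop=True))
```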
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3563/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3563/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3562
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3562/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3562/comments
https://api.github.com/repos/huggingface/datasets/issues/3562/events
https://github.com/huggingface/datasets/pull/3562
1,098,341,351
PR_kwDODunzps4wwa44
3,562
Allow multiple task templates of the same type
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,641,846,727,000
1,641,910,607,000
1,641,910,607,000
CONTRIBUTOR
null
Add support for multiple task templates of the same type. Fixes (partially) #2520. CC: @lewtun
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3562/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3562/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3562", "html_url": "https://github.com/huggingface/datasets/pull/3562", "diff_url": "https://github.com/huggingface/datasets/pull/3562.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3562.patch", "merged_at": 1641910606000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3561
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3561/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3561/comments
https://api.github.com/repos/huggingface/datasets/issues/3561/events
https://github.com/huggingface/datasets/issues/3561
1,098,328,870
I_kwDODunzps5Bdysm
3,561
Cannot load ‘bookcorpusopen’
{ "login": "HUIYINXUE", "id": 54684403, "node_id": "MDQ6VXNlcjU0Njg0NDAz", "avatar_url": "https://avatars.githubusercontent.com/u/54684403?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HUIYINXUE", "html_url": "https://github.com/HUIYINXUE", "followers_url": "https://api.github.com/users/HUIYINXUE/followers", "following_url": "https://api.github.com/users/HUIYINXUE/following{/other_user}", "gists_url": "https://api.github.com/users/HUIYINXUE/gists{/gist_id}", "starred_url": "https://api.github.com/users/HUIYINXUE/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HUIYINXUE/subscriptions", "organizations_url": "https://api.github.com/users/HUIYINXUE/orgs", "repos_url": "https://api.github.com/users/HUIYINXUE/repos", "events_url": "https://api.github.com/users/HUIYINXUE/events{/privacy}", "received_events_url": "https://api.github.com/users/HUIYINXUE/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "The host of this copy of the dataset (https://the-eye.eu) is down and has been down for a good amount of time ([potentially months](https://www.reddit.com/r/Roms/comments/q82s15/theeye_downdied/))\r\n\r\nFinding this dataset is a little esoteric, as the original authors took down the official BookCorpus dataset some time ago.\r\n\r\nThere are community-created versions of BookCorpus, such as the files hosted in the link below.\r\nhttps://battle.shawwn.com/sdb/bookcorpus/\r\n\r\nAnd more discussion here:\r\nhttps://github.com/soskek/bookcorpus\r\n\r\nDo we want to remove this dataset entirely? There's a fair argument for this, given that the official BookCorpus dataset was taken down by the authors. If not, perhaps can open a PR with the link to the community-created tar above and updated dataset description.", "Hi! The `bookcorpusopen` dataset is not working for the same reason as explained in this comment: https://github.com/huggingface/datasets/issues/3504#issuecomment-1004564980", "Hi @HUIYINXUE, it should work now that the data owners created a mirror server with all data, and we updated the URL in our library." ]
1,641,845,838,000
1,644,830,367,000
1,644,830,327,000
NONE
null
## Describe the bug Cannot load 'bookcorpusopen' ## Steps to reproduce the bug ```python dataset = load_dataset('bookcorpusopen') ``` or ```python dataset = load_dataset('bookcorpusopen', script_version='master') ``` ## Actual results ConnectionError: Couldn't reach https://the-eye.eu/public/AI/pile_preliminary_components/books1.tar.gz ## Environment info - `datasets` version: 1.9.0 - Platform: Linux version 3.10.0-1160.45.1.el7.x86_64 - Python version: 3.6.13 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3561/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3561/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3560
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3560/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3560/comments
https://api.github.com/repos/huggingface/datasets/issues/3560/events
https://github.com/huggingface/datasets/pull/3560
1,098,280,652
PR_kwDODunzps4wwOMf
3,560
Run pyupgrade for Python 3.6+
{ "login": "bryant1410", "id": 3905501, "node_id": "MDQ6VXNlcjM5MDU1MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bryant1410", "html_url": "https://github.com/bryant1410", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "repos_url": "https://api.github.com/users/bryant1410/repos", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for the change :)\r\nCould it be possible to only run it for the code in `src/` ? We try to not change the code in the `datasets/` directory too often since it refreshes the users cache when they upgrade `datasets`.", "> Hi ! Thanks for the change :)\r\n> Could it be possible to only run it for the code in `src/` ? We try to not change the code in the `datasets/` directory too often since it refreshes the users cache when they upgrade `datasets`.\r\n\r\nI reverted the changes in `datasets/` instead of changing only `src/`. Does it sound good?", "I just resolved some conflicts with the master branch. If the CI is green we can merge :)" ]
1,641,842,453,000
1,643,636,329,000
1,643,621,854,000
CONTRIBUTOR
null
Run the command: ```bash pyupgrade $(find . -name "*.py" -type f) --py36-plus ``` This mainly avoids unnecessary list creations and also removes code that is unnecessary for Python 3.6+. It was originally part of #3489. Tip for reviewing faster: use the CLI (`git diff`) and scroll.
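To illustrate the kind of rewrite `pyupgrade --py36-plus` performs (a representative sketch, not lines taken from this diff):

```python
# Before: patterns that pyupgrade flags under --py36-plus
name = "world"
greeting = "Hello, {}!".format(name)
unique = set([x * 2 for x in range(10)])

# After: the equivalent rewrites pyupgrade applies
name = "world"
greeting = f"Hello, {name}!"
unique = {x * 2 for x in range(10)}
```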
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3560/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3560/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3560", "html_url": "https://github.com/huggingface/datasets/pull/3560", "diff_url": "https://github.com/huggingface/datasets/pull/3560.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3560.patch", "merged_at": 1643621854000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3559
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3559/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3559/comments
https://api.github.com/repos/huggingface/datasets/issues/3559/events
https://github.com/huggingface/datasets/pull/3559
1,098,178,222
PR_kwDODunzps4wv420
3,559
Fix `DuplicatedKeysError` and improve card in `tweet_qa`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,641,835,660,000
1,642,000,438,000
1,642,000,437,000
CONTRIBUTOR
null
Fix #3555
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3559/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3559/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3559", "html_url": "https://github.com/huggingface/datasets/pull/3559", "diff_url": "https://github.com/huggingface/datasets/pull/3559.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3559.patch", "merged_at": 1642000436000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3558
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3558/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3558/comments
https://api.github.com/repos/huggingface/datasets/issues/3558/events
https://github.com/huggingface/datasets/issues/3558
1,098,025,866
I_kwDODunzps5BcouK
3,558
Integrate Milvus (pymilvus) library
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "xiaofan-luan", "id": 83447078, "node_id": "MDQ6VXNlcjgzNDQ3MDc4", "avatar_url": "https://avatars.githubusercontent.com/u/83447078?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xiaofan-luan", "html_url": "https://github.com/xiaofan-luan", "followers_url": "https://api.github.com/users/xiaofan-luan/followers", "following_url": "https://api.github.com/users/xiaofan-luan/following{/other_user}", "gists_url": "https://api.github.com/users/xiaofan-luan/gists{/gist_id}", "starred_url": "https://api.github.com/users/xiaofan-luan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xiaofan-luan/subscriptions", "organizations_url": "https://api.github.com/users/xiaofan-luan/orgs", "repos_url": "https://api.github.com/users/xiaofan-luan/repos", "events_url": "https://api.github.com/users/xiaofan-luan/events{/privacy}", "received_events_url": "https://api.github.com/users/xiaofan-luan/received_events", "type": "User", "site_admin": false }
[ { "login": "xiaofan-luan", "id": 83447078, "node_id": "MDQ6VXNlcjgzNDQ3MDc4", "avatar_url": "https://avatars.githubusercontent.com/u/83447078?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xiaofan-luan", "html_url": "https://github.com/xiaofan-luan", "followers_url": "https://api.github.com/users/xiaofan-luan/followers", "following_url": "https://api.github.com/users/xiaofan-luan/following{/other_user}", "gists_url": "https://api.github.com/users/xiaofan-luan/gists{/gist_id}", "starred_url": "https://api.github.com/users/xiaofan-luan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xiaofan-luan/subscriptions", "organizations_url": "https://api.github.com/users/xiaofan-luan/orgs", "repos_url": "https://api.github.com/users/xiaofan-luan/repos", "events_url": "https://api.github.com/users/xiaofan-luan/events{/privacy}", "received_events_url": "https://api.github.com/users/xiaofan-luan/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @mariosasko,Just search randomly and I found this issue~ I'm the tech lead of Milvus and we are looking forward to integrate milvus together with huggingface datasets.\r\n\r\nAny suggestion on how we could start?\r\n", "Feel free to assign to me and we probably need some guide on it", "@mariosasko any updates my man?\r\n", "Hi! For starters, I suggest you take a look at this file: https://github.com/huggingface/datasets/blob/master/src/datasets/search.py, which contains all the code for Faiss/ElasticSearch support. We could set up a Slack channel for additional guidance. Let me know what you prefer.", "> Hi! For starters, I suggest you take a look at this file: https://github.com/huggingface/datasets/blob/master/src/datasets/search.py, which contains all the code for Faiss/ElasticSearch support. We could set up a Slack channel for additional guidance. Let me know what you prefer.\r\n\r\nSure, we take a look and do some research" ]
1,641,828,029,000
1,646,483,316,000
null
CONTRIBUTOR
null
Milvus is a popular open-source vector database. We should add a new vector index to support this project.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3558/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3558/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3557
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3557/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3557/comments
https://api.github.com/repos/huggingface/datasets/issues/3557/events
https://github.com/huggingface/datasets/pull/3557
1,097,946,034
PR_kwDODunzps4wvIHl
3,557
Fix bug in `ImageClassification` task template
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The CI failures are unrelated to the changes in this PR.", "> The CI failures are unrelated to the changes in this PR.\r\n\r\nIt seems that some of the failures are due to the tests on the dataset cards (e.g. CIFAR, MNIST, FASHION_MNIST). Perhaps it's worth addressing those in this PR to avoid confusing downstream developers who branch off `master` and suddenly have a failing CI?", "@lewtun We only run these tests against the modified datasets on the PR branch, so this will not lead to errors after merging." ]
1,641,823,799,000
1,641,916,072,000
1,641,916,072,000
CONTRIBUTOR
null
Fixes a bug in the `ImageClassification` task template which requires specifying class labels twice in dataset scripts. Additionally, this PR refactors the API around the classification task templates for cleaner `labels` handling. CC: @lewtun @nateraw
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3557/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3557/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3557", "html_url": "https://github.com/huggingface/datasets/pull/3557", "diff_url": "https://github.com/huggingface/datasets/pull/3557.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3557.patch", "merged_at": 1641916072000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3556
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3556/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3556/comments
https://api.github.com/repos/huggingface/datasets/issues/3556/events
https://github.com/huggingface/datasets/pull/3556
1,097,907,724
PR_kwDODunzps4wvALx
3,556
Preserve encoding/decoding with features in `Iterable.map` call
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,641,821,540,000
1,642,535,648,000
1,642,535,647,000
CONTRIBUTOR
null
As described in https://github.com/huggingface/datasets/issues/3505#issuecomment-1004755657, this PR uses a generator expression to encode/decode examples with `features` (which are set to None in `map`) before applying a map transform. Fix #3505
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3556/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3556/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3556", "html_url": "https://github.com/huggingface/datasets/pull/3556", "diff_url": "https://github.com/huggingface/datasets/pull/3556.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3556.patch", "merged_at": 1642535647000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3555
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3555/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3555/comments
https://api.github.com/repos/huggingface/datasets/issues/3555/events
https://github.com/huggingface/datasets/issues/3555
1,097,736,982
I_kwDODunzps5BbiMW
3,555
DuplicatedKeysError when loading tweet_qa dataset
{ "login": "LeonieWeissweiler", "id": 30300891, "node_id": "MDQ6VXNlcjMwMzAwODkx", "avatar_url": "https://avatars.githubusercontent.com/u/30300891?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LeonieWeissweiler", "html_url": "https://github.com/LeonieWeissweiler", "followers_url": "https://api.github.com/users/LeonieWeissweiler/followers", "following_url": "https://api.github.com/users/LeonieWeissweiler/following{/other_user}", "gists_url": "https://api.github.com/users/LeonieWeissweiler/gists{/gist_id}", "starred_url": "https://api.github.com/users/LeonieWeissweiler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LeonieWeissweiler/subscriptions", "organizations_url": "https://api.github.com/users/LeonieWeissweiler/orgs", "repos_url": "https://api.github.com/users/LeonieWeissweiler/repos", "events_url": "https://api.github.com/users/LeonieWeissweiler/events{/privacy}", "received_events_url": "https://api.github.com/users/LeonieWeissweiler/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi, we've just merged the PR with the fix. The fixed version of the dataset can be downloaded as follows:\r\n```python\r\nimport datasets\r\ndset = datasets.load_dataset(\"tweet_qa\", revision=\"master\")\r\n```" ]
1,641,811,991,000
1,642,000,653,000
1,642,000,436,000
NONE
null
When loading the tweet_qa dataset with `load_dataset('tweet_qa')`, the following error occurs: `DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: 2a167f9e016ba338e1813fed275a6a1e Keys should be unique and deterministic in nature ` Might be related to issues #2433 and #2333 - `datasets` version: 1.17.0 - Python version: 3.8.5
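For context, `DuplicatedKeysError` is raised when a dataset script yields the same key twice from `_generate_examples`. A common fix (sketched below with illustrative field names, not the exact patch applied in #3559) is to key on a running index, which is unique and deterministic:

```python
import json

def _generate_examples(self, filepath):
    with open(filepath, encoding="utf-8") as f:
        data = json.load(f)
    for idx, example in enumerate(data):
        # enumerate() gives a unique, deterministic key even when the
        # underlying records repeat an id field.
        yield idx, {
            "Question": example["Question"],
            "Answer": example["Answer"],
            "Tweet": example["Tweet"],
        }
```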
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3555/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3555/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3554
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3554/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3554/comments
https://api.github.com/repos/huggingface/datasets/issues/3554/events
https://github.com/huggingface/datasets/issues/3554
1,097,711,367
I_kwDODunzps5Bbb8H
3,554
ImportError: cannot import name 'is_valid_waiter_error'
{ "login": "danielbellhv", "id": 84714841, "node_id": "MDQ6VXNlcjg0NzE0ODQx", "avatar_url": "https://avatars.githubusercontent.com/u/84714841?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danielbellhv", "html_url": "https://github.com/danielbellhv", "followers_url": "https://api.github.com/users/danielbellhv/followers", "following_url": "https://api.github.com/users/danielbellhv/following{/other_user}", "gists_url": "https://api.github.com/users/danielbellhv/gists{/gist_id}", "starred_url": "https://api.github.com/users/danielbellhv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danielbellhv/subscriptions", "organizations_url": "https://api.github.com/users/danielbellhv/orgs", "repos_url": "https://api.github.com/users/danielbellhv/repos", "events_url": "https://api.github.com/users/danielbellhv/events{/privacy}", "received_events_url": "https://api.github.com/users/danielbellhv/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi! I can't reproduce this error in Colab, but I'm assuming you are using Amazon SageMaker Studio Notebooks (you mention the `conda_pytorch_p36` kernel), so maybe @philschmid knows more about what might be causing this issue? ", "Hey @mariosasko. Yes, I am using **Amazon SageMaker Studio Jupyter Labs**. However, I no longer need this notebook; but it would be nice to have this problem solved for others. So don't stress too much if you two can't reproduce error.", "Hey @danielbellhv, \r\n\r\nThis issue might be related to Studio probably not having an up to date `botocore` and `boto3` version. I ran into this as well a while back. My workaround was \r\n```python\r\n# using older dataset due to incompatibility of sagemaker notebook & aws-cli with > s3fs and fsspec to >= 2021.10\r\n!pip install \"datasets==1.13\" --upgrade\r\n```\r\n\r\nIn `datasets` we use the latest `s3fs` and `fsspec` but aws-cli and notebook is not supporting this. You could also update the `aws-cli` and associated packages to get the latest `datasets` version\r\n" ]
1,641,810,724,000
1,644,831,357,000
1,644,831,357,000
NONE
null
Based on [SO post](https://stackoverflow.com/q/70606147/17840900). I'm following along to this [Notebook][1], cell "**Loading the dataset**". Kernel: `conda_pytorch_p36`. I run: ``` ! pip install datasets transformers optimum[intel] ``` Output: ``` Requirement already satisfied: datasets in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (1.17.0) Requirement already satisfied: transformers in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (4.15.0) Requirement already satisfied: optimum[intel] in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (0.1.3) Requirement already satisfied: numpy>=1.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (1.19.5) Requirement already satisfied: dill in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (0.3.4) Requirement already satisfied: tqdm>=4.62.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (4.62.3) Requirement already satisfied: huggingface-hub<1.0.0,>=0.1.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (0.2.1) Requirement already satisfied: packaging in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (21.3) Requirement already satisfied: pyarrow!=4.0.0,>=3.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (6.0.1) Requirement already satisfied: pandas in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (1.1.5) Requirement already satisfied: xxhash in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (2.0.2) Requirement already satisfied: aiohttp in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (3.8.1) Requirement already satisfied: fsspec[http]>=2021.05.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (2021.11.1) Requirement already satisfied: dataclasses in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (0.8) Requirement already satisfied: multiprocess in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (0.70.12.2) Requirement already satisfied: importlib-metadata in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (4.5.0) Requirement already satisfied: requests>=2.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (2.25.1) Requirement already satisfied: pyyaml>=5.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (5.4.1) Requirement already satisfied: regex!=2019.12.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (2021.4.4) Requirement already satisfied: tokenizers<0.11,>=0.10.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (0.10.3) Requirement already satisfied: filelock in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (3.0.12) Requirement already satisfied: sacremoses in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (0.0.46) Requirement already satisfied: torch>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (1.10.1) Requirement already satisfied: sympy in 
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (1.8) Requirement already satisfied: coloredlogs in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (15.0.1) Requirement already satisfied: pycocotools in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (2.0.3) Requirement already satisfied: neural-compressor>=1.7 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (1.9) Requirement already satisfied: typing-extensions>=3.7.4.3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets) (3.10.0.0) Requirement already satisfied: sigopt in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (8.2.0) Requirement already satisfied: opencv-python in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (4.5.1.48) Requirement already satisfied: cryptography in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (3.4.7) Requirement already satisfied: py-cpuinfo in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (8.0.0) Requirement already satisfied: gevent in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (21.1.2) Requirement already satisfied: schema in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (0.7.5) Requirement already satisfied: psutil in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (5.8.0) Requirement already satisfied: gevent-websocket in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (0.10.1) Requirement already satisfied: hyperopt in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (0.2.7) Requirement already satisfied: Flask in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (2.0.1) Requirement already satisfied: prettytable in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (2.5.0) Requirement already satisfied: Flask-SocketIO in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (5.1.1) Requirement already satisfied: scikit-learn in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (0.24.2) Requirement already satisfied: Pillow in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (8.4.0) Requirement already satisfied: Flask-Cors in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (3.0.10) Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging->datasets) (2.4.7) Requirement already satisfied: chardet<5,>=3.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages 
(from requests>=2.19.0->datasets) (4.0.0) Requirement already satisfied: certifi>=2017.4.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.19.0->datasets) (2021.5.30) Requirement already satisfied: urllib3<1.27,>=1.21.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.19.0->datasets) (1.26.5) Requirement already satisfied: idna<3,>=2.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.19.0->datasets) (2.10) Requirement already satisfied: yarl<2.0,>=1.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (1.6.3) Requirement already satisfied: charset-normalizer<3.0,>=2.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (2.0.9) Requirement already satisfied: attrs>=17.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (21.2.0) Requirement already satisfied: asynctest==0.13.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (0.13.0) Requirement already satisfied: idna-ssl>=1.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (1.1.0) Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (4.0.1) Requirement already satisfied: aiosignal>=1.1.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (1.2.0) Requirement already satisfied: frozenlist>=1.1.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (1.2.0) Requirement already satisfied: multidict<7.0,>=4.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (5.1.0) Requirement already satisfied: humanfriendly>=9.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from coloredlogs->optimum[intel]) (10.0) Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata->datasets) (3.4.1) Requirement already satisfied: python-dateutil>=2.7.3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pandas->datasets) (2.8.1) Requirement already satisfied: pytz>=2017.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pandas->datasets) (2021.1) Requirement already satisfied: matplotlib>=2.1.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pycocotools->optimum[intel]) (3.3.4) Requirement already satisfied: cython>=0.27.3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pycocotools->optimum[intel]) (0.29.23) Requirement already satisfied: setuptools>=18.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pycocotools->optimum[intel]) (52.0.0.post20210125) Requirement already satisfied: joblib in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sacremoses->transformers) (1.0.1) Requirement already satisfied: click in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sacremoses->transformers) (8.0.1) Requirement already satisfied: six in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sacremoses->transformers) (1.16.0) Requirement already 
satisfied: mpmath>=0.19 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sympy->optimum[intel]) (1.2.1) Requirement already satisfied: kiwisolver>=1.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from matplotlib>=2.1.0->pycocotools->optimum[intel]) (1.3.1) Requirement already satisfied: cycler>=0.10 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/cycler-0.10.0-py3.6.egg (from matplotlib>=2.1.0->pycocotools->optimum[intel]) (0.10.0) Requirement already satisfied: cffi>=1.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from cryptography->neural-compressor>=1.7->optimum[intel]) (1.14.5) Requirement already satisfied: Werkzeug>=2.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Flask->neural-compressor>=1.7->optimum[intel]) (2.0.2) Requirement already satisfied: Jinja2>=3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Flask->neural-compressor>=1.7->optimum[intel]) (3.0.1) Requirement already satisfied: itsdangerous>=2.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Flask->neural-compressor>=1.7->optimum[intel]) (2.0.1) Requirement already satisfied: python-socketio>=5.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Flask-SocketIO->neural-compressor>=1.7->optimum[intel]) (5.5.0) Requirement already satisfied: zope.event in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from gevent->neural-compressor>=1.7->optimum[intel]) (4.5.0) Requirement already satisfied: greenlet<2.0,>=0.4.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from gevent->neural-compressor>=1.7->optimum[intel]) (1.1.0) Requirement already satisfied: zope.interface in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from gevent->neural-compressor>=1.7->optimum[intel]) (5.4.0) Requirement already satisfied: future in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (0.18.2) Requirement already satisfied: cloudpickle in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (1.6.0) Requirement already satisfied: networkx>=2.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (2.5) Requirement already satisfied: scipy in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (1.5.3) Requirement already satisfied: py4j in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (0.10.7) Requirement already satisfied: wcwidth in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from prettytable->neural-compressor>=1.7->optimum[intel]) (0.2.5) Requirement already satisfied: contextlib2>=0.5.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from schema->neural-compressor>=1.7->optimum[intel]) (0.6.0.post1) Requirement already satisfied: threadpoolctl>=2.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from scikit-learn->neural-compressor>=1.7->optimum[intel]) (2.1.0) Requirement already satisfied: pyOpenSSL>=20.0.0 in 
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (20.0.1) Requirement already satisfied: pypng>=0.0.20 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (0.0.21) Requirement already satisfied: kubernetes<13.0.0,>=12.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (12.0.1) Requirement already satisfied: rsa<5.0.0,>=4.7 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (4.7.2) Requirement already satisfied: boto3<2.0.0,==1.16.34 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (1.16.34) Requirement already satisfied: Pint<0.17.0,>=0.16.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (0.16.1) Requirement already satisfied: GitPython>=2.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (3.1.18) Requirement already satisfied: backoff<2.0.0,>=1.10.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (1.11.1) Requirement already satisfied: ipython>=5.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (7.16.1) Requirement already satisfied: docker<5.0.0,>=4.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (4.4.4) Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3<2.0.0,==1.16.34->sigopt->neural-compressor>=1.7->optimum[intel]) (0.10.0) Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3<2.0.0,==1.16.34->sigopt->neural-compressor>=1.7->optimum[intel]) (0.3.7) Requirement already satisfied: botocore<1.20.0,>=1.19.34 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3<2.0.0,==1.16.34->sigopt->neural-compressor>=1.7->optimum[intel]) (1.19.63) Requirement already satisfied: pycparser in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from cffi>=1.12->cryptography->neural-compressor>=1.7->optimum[intel]) (2.20) Requirement already satisfied: websocket-client>=0.32.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from docker<5.0.0,>=4.4.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.58.0) Requirement already satisfied: gitdb<5,>=4.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from GitPython>=2.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (4.0.9) Requirement already satisfied: traitlets>=4.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (4.3.3) Requirement already satisfied: jedi>=0.10 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.17.2) Requirement already satisfied: prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0 in 
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (3.0.19) Requirement already satisfied: backcall in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.2.0) Requirement already satisfied: pygments in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (2.9.0) Requirement already satisfied: pexpect in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (4.8.0) Requirement already satisfied: decorator in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (5.0.9) Requirement already satisfied: pickleshare in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.7.5) Requirement already satisfied: MarkupSafe>=2.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Jinja2>=3.0->Flask->neural-compressor>=1.7->optimum[intel]) (2.0.1) Requirement already satisfied: google-auth>=1.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (1.30.2) Requirement already satisfied: requests-oauthlib in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (1.3.0) Requirement already satisfied: importlib-resources in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Pint<0.17.0,>=0.16.0->sigopt->neural-compressor>=1.7->optimum[intel]) (5.4.0) Requirement already satisfied: python-engineio>=4.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from python-socketio>=5.0.2->Flask-SocketIO->neural-compressor>=1.7->optimum[intel]) (4.3.0) Requirement already satisfied: bidict>=0.21.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from python-socketio>=5.0.2->Flask-SocketIO->neural-compressor>=1.7->optimum[intel]) (0.21.4) Requirement already satisfied: pyasn1>=0.1.3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from rsa<5.0.0,>=4.7->sigopt->neural-compressor>=1.7->optimum[intel]) (0.4.8) Requirement already satisfied: smmap<6,>=3.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from gitdb<5,>=4.0.1->GitPython>=2.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (5.0.0) Requirement already satisfied: pyasn1-modules>=0.2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from google-auth>=1.0.1->kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (0.2.8) Requirement already satisfied: cachetools<5.0,>=2.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from google-auth>=1.0.1->kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (4.2.2) Requirement already satisfied: parso<0.8.0,>=0.7.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from jedi>=0.10->ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.7.1) Requirement already satisfied: ipython-genutils in 
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from traitlets>=4.2->ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.2.0) Requirement already satisfied: ptyprocess>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pexpect->ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.7.0) Requirement already satisfied: oauthlib>=3.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests-oauthlib->kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (3.1.1) ``` --- **Cell:** ```python from datasets import load_dataset, load_metric ``` OR ```python import datasets ``` **Traceback:** ``` --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-7-34fb7ba3338d> in <module> ----> 1 from datasets import load_dataset, load_metric ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/__init__.py in <module> 32 ) 33 ---> 34 from .arrow_dataset import Dataset, concatenate_datasets 35 from .arrow_reader import ArrowReader, ReadInstruction 36 from .arrow_writer import ArrowWriter ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/arrow_dataset.py in <module> 59 from . import config, utils 60 from .arrow_reader import ArrowReader ---> 61 from .arrow_writer import ArrowWriter, OptimizedTypedSequence 62 from .features import ClassLabel, Features, FeatureType, Sequence, Value, _ArrayXD, pandas_types_mapper 63 from .filesystems import extract_path_from_uri, is_remote_filesystem ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/arrow_writer.py in <module> 26 27 from . import config, utils ---> 28 from .features import ( 29 Features, 30 ImageExtensionType, ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/features/__init__.py in <module> 1 # flake8: noqa ----> 2 from .audio import Audio 3 from .features import * 4 from .features import ( 5 _ArrayXD, ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/features/audio.py in <module> 5 import pyarrow as pa 6 ----> 7 from ..utils.streaming_download_manager import xopen 8 9 ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/utils/streaming_download_manager.py in <module> 16 17 from .. 
import config ---> 18 from ..filesystems import COMPRESSION_FILESYSTEMS 19 from .download_manager import DownloadConfig, map_nested 20 from .file_utils import ( ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/filesystems/__init__.py in <module> 11 12 if _has_s3fs: ---> 13 from .s3filesystem import S3FileSystem # noqa: F401 14 15 COMPRESSION_FILESYSTEMS: List[compression.BaseCompressedFileFileSystem] = [ ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/filesystems/s3filesystem.py in <module> ----> 1 import s3fs 2 3 4 class S3FileSystem(s3fs.S3FileSystem): 5 """ ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/s3fs/__init__.py in <module> ----> 1 from .core import S3FileSystem, S3File 2 from .mapping import S3Map 3 4 from ._version import get_versions 5 ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/s3fs/core.py in <module> 12 from fsspec.asyn import AsyncFileSystem, sync, sync_wrapper 13 ---> 14 import aiobotocore 15 import botocore 16 import aiobotocore.session ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/aiobotocore/__init__.py in <module> ----> 1 from .session import get_session, AioSession 2 3 __all__ = ['get_session', 'AioSession'] 4 __version__ = '1.3.0' ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/aiobotocore/session.py in <module> 4 from botocore import retryhandler, translate 5 from botocore.exceptions import PartialCredentialsError ----> 6 from .client import AioClientCreator, AioBaseClient 7 from .hooks import AioHierarchicalEmitter 8 from .parsers import AioResponseParserFactory ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/aiobotocore/client.py in <module> 11 from .args import AioClientArgsCreator 12 from .utils import AioS3RegionRedirector ---> 13 from . import waiter 14 15 history_recorder = get_global_history_recorder() ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/aiobotocore/waiter.py in <module> 4 from botocore.exceptions import ClientError 5 from botocore.waiter import WaiterModel # noqa: F401, lgtm[py/unused-import] ----> 6 from botocore.waiter import Waiter, xform_name, logger, WaiterError, \ 7 NormalizedOperationMethod as _NormalizedOperationMethod, is_valid_waiter_error 8 from botocore.docs.docstring import WaiterDocstring ImportError: cannot import name 'is_valid_waiter_error' ``` Please let me know if there's anything else I can add to post. [1]: https://github.com/huggingface/notebooks/blob/master/examples/text_classification_quantization_inc.ipynb
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3554/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3554/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3553
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3553/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3553/comments
https://api.github.com/repos/huggingface/datasets/issues/3553/events
https://github.com/huggingface/datasets/issues/3553
1,097,252,275
I_kwDODunzps5BZr2z
3,553
set_format("np") no longer works for Image data
{ "login": "cgarciae", "id": 5862228, "node_id": "MDQ6VXNlcjU4NjIyMjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5862228?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cgarciae", "html_url": "https://github.com/cgarciae", "followers_url": "https://api.github.com/users/cgarciae/followers", "following_url": "https://api.github.com/users/cgarciae/following{/other_user}", "gists_url": "https://api.github.com/users/cgarciae/gists{/gist_id}", "starred_url": "https://api.github.com/users/cgarciae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cgarciae/subscriptions", "organizations_url": "https://api.github.com/users/cgarciae/orgs", "repos_url": "https://api.github.com/users/cgarciae/repos", "events_url": "https://api.github.com/users/cgarciae/events{/privacy}", "received_events_url": "https://api.github.com/users/cgarciae/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "A quick fix for now is doing this:\r\n\r\n```python\r\nX_train = np.stack(dataset[\"train\"][\"image\"])[..., None]", "This error also propagates to jax and is even trickier to fix, since `.with_format(type='jax')` will use numpy conversion internally (and fail). For a three line failure:\r\n\r\n```python\r\ndataset = datasets.load_dataset(\"mnist\")\r\ndataset.set_format(\"jax\")\r\nX_train = dataset[\"train\"][\"image\"]\r\n```", "Hi! We've recently introduced a new Image feature that yields PIL Images (and caches transforms on them) instead of arrays.\r\n\r\nHowever, this feature requires a custom transform to yield np arrays directly:\r\n```python\r\nddict = datasets.load_dataset(\"mnist\")\r\n\r\ndef pil_image_to_array(batch):\r\n return {\"image\": [np.array(img) for img in batch[\"image\"]]} # or jnp.array(img) for Jax\r\n\r\nddict.set_transform(pil_image_to_array, columns=\"image\", output_all_columns=True)\r\n```\r\n\r\n[Docs](https://huggingface.co/docs/datasets/master/process.html#format-transform) on `set_transform`.\r\n\r\nAlso, the approach proposed by @cgarciae is not the best because it loads the entire column in memory.\r\n\r\n@albertvillanova @lhoestq WDYT? The Audio and the Image feature currently don't support the TF/Jax/PT Formatters, but for the Numpy Formatter maybe it makes more sense to return np arrays (and not a dict in the case of the Audio feature or a PIL Image object in the case of the Image feature).", "Yes I agree it should return arrays and not a PIL image (and possible an array instead of a dict for audio data).\r\nI'm currently finishing some code refactoring of the image and audio and opening a PR today. Maybe we can look into that after the refactoring" ]
1,641,748,693,000
1,642,081,166,000
null
NONE
null
## Describe the bug `dataset.set_format("np")` no longer works for image data. Previously, you could load MNIST like this: ```python dataset = load_dataset("mnist") dataset.set_format("np") X_train = dataset["train"]["image"][..., None] # <== No longer a numpy array ``` but now it doesn't work: `set_format("np")` seems to have no effect, and the dataset just returns a list/array of PIL images instead of numpy arrays as requested.
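A minimal sketch of the interim workaround described in the comments above, assuming the new Image feature that yields PIL images: register a transform with `set_transform` so the column is converted to numpy arrays on access. The dataset and column names follow this report; passing the columns as a list is my own adjustment.

```python
import numpy as np
from datasets import load_dataset

ddict = load_dataset("mnist")

def pil_image_to_array(batch):
    # Convert each PIL.Image in the batch to a numpy array on access
    return {"image": [np.array(img) for img in batch["image"]]}

# Apply lazily on the "image" column, keeping the remaining columns untouched
ddict.set_transform(pil_image_to_array, columns=["image"], output_all_columns=True)

X_train = np.stack(ddict["train"]["image"])[..., None]  # back to an (N, 28, 28, 1) array
```

Unlike stacking the raw column directly, this converts images on access instead of materializing the whole column in memory up front.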
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3553/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3553/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3552
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3552/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3552/comments
https://api.github.com/repos/huggingface/datasets/issues/3552/events
https://github.com/huggingface/datasets/pull/3552
1,096,985,204
PR_kwDODunzps4wsM29
3,552
Add the KMWP & DKTC dataset.
{ "login": "sooftware", "id": 42150335, "node_id": "MDQ6VXNlcjQyMTUwMzM1", "avatar_url": "https://avatars.githubusercontent.com/u/42150335?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sooftware", "html_url": "https://github.com/sooftware", "followers_url": "https://api.github.com/users/sooftware/followers", "following_url": "https://api.github.com/users/sooftware/following{/other_user}", "gists_url": "https://api.github.com/users/sooftware/gists{/gist_id}", "starred_url": "https://api.github.com/users/sooftware/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sooftware/subscriptions", "organizations_url": "https://api.github.com/users/sooftware/orgs", "repos_url": "https://api.github.com/users/sooftware/repos", "events_url": "https://api.github.com/users/sooftware/events{/privacy}", "received_events_url": "https://api.github.com/users/sooftware/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,641,661,934,000
1,641,910,410,000
1,641,910,410,000
NONE
null
Add the KMWP & DKTC datasets. Additional notes: - Both datasets will be released on January 10 through the GitHub links below. - https://github.com/tunib-ai/DKTC - https://github.com/tunib-ai/KMWP - So the links don't work at the moment, but the code will work soon (after the release on January 10).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3552/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3552/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3552", "html_url": "https://github.com/huggingface/datasets/pull/3552", "diff_url": "https://github.com/huggingface/datasets/pull/3552.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3552.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3551
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3551/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3551/comments
https://api.github.com/repos/huggingface/datasets/issues/3551/events
https://github.com/huggingface/datasets/pull/3551
1,096,561,111
PR_kwDODunzps4wq_AO
3,551
Add more compression types for `to_json`
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq, I looked into how to compress with `zipfile` for which few methods exist, let me know which one looks good:\r\n1. create the file in normal `wb` mode and then zip it separately\r\n2. use `ZipFile.write_str` to write file into the archive. For this we'll need to change how we're writing files from `_write` method \r\n\r\nHow `pandas` handles it is that they have created a wrapper for standard library class `ZipFile` and allow the returned file-like handle to accept byte strings via `write` method instead of `write_str` (purpose was to change the name of function by creating that wrapper)", "1. sounds not ideal since it creates an intermediary file.\r\nI like pandas' approach. Is it possible to implement 2. using the pandas class ? Or maybe we can have something similar ?", "Definitely, @lhoestq! I've adapted that from original code and turns out it is faster than `gz` compression. Apart from that I've also added `infer` option to automatically infer compression type from `path_or_buf` given", "One small thing, currently I'm assuming that user will provide compression extension in `path_or_buf`. Is it this also possible?\r\n`dataset.to_json(\"from_dataset.json\", compression=\"zip\")`? \r\nShould I put an `assert` to ensure the file name provided always has a compression extension?", "Thanks !\r\n\r\n> One small thing, currently I'm assuming that user will provide compression extension in path_or_buf. Is it this also possible?\r\n>dataset.to_json(\"from_dataset.json\", compression=\"zip\")?\r\n>Should I put an assert to ensure the file name provided always has a compression extension?\r\n\r\nI think it's fine as it is right now :) No need to check the extension of the filename passed to `path_or_buf`.\r\n", "> turns out it is faster than gz compression\r\n\r\nI think the default compression level of `gzip` is 9 in python, which is very slow. Maybe we can switch to compression level 6 instead which is faster, like the `gzip` command on unix", "I found that `fsspec` has something that may interest you: [fsspec.open(..., compression=...)](https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open). I don't remember if we've already mentioned it or not\r\n\r\nIt also has `zip` if I understand correctly ! see https://github.com/fsspec/filesystem_spec/blob/master/fsspec/compression.py#L70\r\n\r\nSince `fsspec` is a dependency of `datasets` we can use all this :)\r\n\r\nLet me know if you prefer using `fsspec` instead (I haven't tested this yet to write compressed files). IMO it sounds pretty easy to use and it would make the code base simpler", "Just tried `fsspec` but I'm not able to write compressed `zip` files :/\r\n`gzip`, `xz`, `bz2` are all working fine and it's really simple (no need for `FileWriteHandler` now!)" ]
1,641,579,902,000
1,645,459,095,000
1,645,459,095,000
CONTRIBUTOR
null
This PR adds `bz2`, `xz`, and `zip` (WIP) compression for `to_json`. I also plan to add an `infer` option, like `pandas` does.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3551/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3551/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3551", "html_url": "https://github.com/huggingface/datasets/pull/3551", "diff_url": "https://github.com/huggingface/datasets/pull/3551.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3551.patch", "merged_at": 1645459095000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3550
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3550/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3550/comments
https://api.github.com/repos/huggingface/datasets/issues/3550/events
https://github.com/huggingface/datasets/issues/3550
1,096,522,377
I_kwDODunzps5BW5qJ
3,550
Bug in `openbookqa` dataset
{ "login": "lucadiliello", "id": 23355969, "node_id": "MDQ6VXNlcjIzMzU1OTY5", "avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lucadiliello", "html_url": "https://github.com/lucadiliello", "followers_url": "https://api.github.com/users/lucadiliello/followers", "following_url": "https://api.github.com/users/lucadiliello/following{/other_user}", "gists_url": "https://api.github.com/users/lucadiliello/gists{/gist_id}", "starred_url": "https://api.github.com/users/lucadiliello/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucadiliello/subscriptions", "organizations_url": "https://api.github.com/users/lucadiliello/orgs", "repos_url": "https://api.github.com/users/lucadiliello/repos", "events_url": "https://api.github.com/users/lucadiliello/events{/privacy}", "received_events_url": "https://api.github.com/users/lucadiliello/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[]
1,641,576,777,000
1,642,425,393,000
null
CONTRIBUTOR
null
## Describe the bug Dataset entries contain an error: the `label` field duplicates the choice texts instead of holding the letter labels. ## Steps to reproduce the bug ```python >>> from datasets import load_dataset >>> obqa = load_dataset('openbookqa', 'main') >>> obqa['train'][0] ``` ## Expected results ```python {'id': '7-980', 'question_stem': 'The sun is responsible for', 'choices': {'text': ['puppies learning new tricks', 'children growing up and getting old', 'flowers wilting in a vase', 'plants sprouting, blooming and wilting'], 'label': ['A', 'B', 'C', 'D']}, 'answerKey': 'D'} ``` ## Actual results ```python {'id': '7-980', 'question_stem': 'The sun is responsible for', 'choices': {'text': ['puppies learning new tricks', 'children growing up and getting old', 'flowers wilting in a vase', 'plants sprouting, blooming and wilting'], 'label': ['puppies learning new tricks', 'children growing up and getting old', 'flowers wilting in a vase', 'plants sprouting, blooming and wilting']}, 'answerKey': 'D'} ``` The bug is present in all configs and all splits. ## Environment info - `datasets` version: 1.17.0 - Platform: Linux-5.4.0-1057-aws-x86_64-with-glibc2.27 - Python version: 3.9.7 - PyArrow version: 4.0.1
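Until the loading script is fixed, a hedged interim sketch (my own workaround, not an official fix) is to relabel the choices after loading, since the expected labels are simply consecutive letters:

```python
def fix_choice_labels(example):
    # Replace the duplicated answer texts with letter labels (assumed to be A, B, C, ... in order)
    n = len(example["choices"]["text"])
    example["choices"]["label"] = [chr(ord("A") + i) for i in range(n)]
    return example

obqa = obqa.map(fix_choice_labels)
```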
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3550/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3550/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3549
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3549/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3549/comments
https://api.github.com/repos/huggingface/datasets/issues/3549/events
https://github.com/huggingface/datasets/pull/3549
1,096,426,996
PR_kwDODunzps4wqkGt
3,549
Fix sem_eval_2018_task_1 download location
{ "login": "maxpel", "id": 31095360, "node_id": "MDQ6VXNlcjMxMDk1MzYw", "avatar_url": "https://avatars.githubusercontent.com/u/31095360?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maxpel", "html_url": "https://github.com/maxpel", "followers_url": "https://api.github.com/users/maxpel/followers", "following_url": "https://api.github.com/users/maxpel/following{/other_user}", "gists_url": "https://api.github.com/users/maxpel/gists{/gist_id}", "starred_url": "https://api.github.com/users/maxpel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maxpel/subscriptions", "organizations_url": "https://api.github.com/users/maxpel/orgs", "repos_url": "https://api.github.com/users/maxpel/repos", "events_url": "https://api.github.com/users/maxpel/events{/privacy}", "received_events_url": "https://api.github.com/users/maxpel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for pushing this :)\r\n\r\nIt seems that you created this PR from an old version of `datasets` that didn't have the sem_eval_2018_task_1.py file.\r\n\r\nCan you try merging `master` into your branch ? Or re-create your PR from a branch that comes from a more recent version of `datasets` ?\r\n\r\nAnd sorry for the late response !", "Hi! No problem! I made the new branch like you said and opened https://github.com/huggingface/datasets/pull/3643 for it. I will close this one." ]
1,641,569,872,000
1,643,298,723,000
1,643,298,723,000
CONTRIBUTOR
null
This changes the download location of the sem_eval_2018_task_1 files to include the test set labels, as discussed with @lhoestq in https://github.com/huggingface/datasets/issues/2745#issuecomment-954588500.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3549/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3549/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3549", "html_url": "https://github.com/huggingface/datasets/pull/3549", "diff_url": "https://github.com/huggingface/datasets/pull/3549.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3549.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3548
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3548/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3548/comments
https://api.github.com/repos/huggingface/datasets/issues/3548/events
https://github.com/huggingface/datasets/issues/3548
1,096,409,512
I_kwDODunzps5BWeGo
3,548
Specify the feature types of a dataset on the Hub without needing a dataset script
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "abidlabs", "id": 1778297, "node_id": "MDQ6VXNlcjE3NzgyOTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1778297?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abidlabs", "html_url": "https://github.com/abidlabs", "followers_url": "https://api.github.com/users/abidlabs/followers", "following_url": "https://api.github.com/users/abidlabs/following{/other_user}", "gists_url": "https://api.github.com/users/abidlabs/gists{/gist_id}", "starred_url": "https://api.github.com/users/abidlabs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abidlabs/subscriptions", "organizations_url": "https://api.github.com/users/abidlabs/orgs", "repos_url": "https://api.github.com/users/abidlabs/repos", "events_url": "https://api.github.com/users/abidlabs/events{/privacy}", "received_events_url": "https://api.github.com/users/abidlabs/received_events", "type": "User", "site_admin": false }
[ { "login": "abidlabs", "id": 1778297, "node_id": "MDQ6VXNlcjE3NzgyOTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1778297?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abidlabs", "html_url": "https://github.com/abidlabs", "followers_url": "https://api.github.com/users/abidlabs/followers", "following_url": "https://api.github.com/users/abidlabs/following{/other_user}", "gists_url": "https://api.github.com/users/abidlabs/gists{/gist_id}", "starred_url": "https://api.github.com/users/abidlabs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abidlabs/subscriptions", "organizations_url": "https://api.github.com/users/abidlabs/orgs", "repos_url": "https://api.github.com/users/abidlabs/repos", "events_url": "https://api.github.com/users/abidlabs/events{/privacy}", "received_events_url": "https://api.github.com/users/abidlabs/received_events", "type": "User", "site_admin": false } ]
null
[ "After looking into this, discovered that this is already supported if the `dataset_infos.json` file is configured correctly! Here is a working example: https://huggingface.co/datasets/abidlabs/test-audio-13\r\n\r\nThis should be probably be documented, though. " ]
1,641,568,626,000
1,642,690,118,000
1,642,690,118,000
MEMBER
null
**Is your feature request related to a problem? Please describe.** Currently, if I upload a CSV with paths to audio files, the column type is string instead of Audio. **Describe the solution you'd like** I'd like to be able to specify the types of the columns, so that when loading the dataset I directly get the feature types I want. The feature types could be read from the `dataset_infos.json`, for example. **Describe alternatives you've considered** Create a dataset script to specify the features, but that seems complicated for a simple thing. cc @abidlabs
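For reference, a minimal sketch of how typed columns can be obtained today without a dataset script, by casting after loading; the CSV file name and the "audio" column are hypothetical:

```python
from datasets import load_dataset, Audio

# "data.csv" is assumed to have an "audio" column containing file paths
ds = load_dataset("csv", data_files="data.csv")["train"]
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
```

The request is about making this casting step unnecessary by declaring the feature types alongside the data on the Hub.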
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3548/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3548/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3547
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3547/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3547/comments
https://api.github.com/repos/huggingface/datasets/issues/3547/events
https://github.com/huggingface/datasets/issues/3547
1,096,405,515
I_kwDODunzps5BWdIL
3,547
Datasets created with `push_to_hub` can't be accessed in offline mode
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting. I think this can be fixed by improving the `CachedDatasetModuleFactory` and making it look into the `parquet` cache directory (datasets from push_to_hub are loaded with the parquet dataset builder). I'll look into it" ]
1,641,568,345,000
1,641,811,484,000
null
MEMBER
null
## Describe the bug In offline mode, one can still access previously-cached datasets. This fails with datasets created with `push_to_hub`. ## Steps to reproduce the bug in Python: ``` import datasets mpwiki = datasets.load_dataset("teven/matched_passages_wikidata") ``` in bash: ``` export HF_DATASETS_OFFLINE=1 ``` in Python: ``` import datasets mpwiki = datasets.load_dataset("teven/matched_passages_wikidata") ``` ## Expected results `datasets` should find the previously-cached dataset. ## Actual results ConnectionError: Couldn't reach the Hugging Face Hub for dataset 'teven/matched_passages_wikidata': Offline mode is enabled ## Environment info - `datasets` version: 1.16.2.dev0 - Platform: Linux-4.18.0-193.70.1.el8_2.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.10 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3547/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3547/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3546
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3546/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3546/comments
https://api.github.com/repos/huggingface/datasets/issues/3546/events
https://github.com/huggingface/datasets/pull/3546
1,096,367,684
PR_kwDODunzps4wqYIV
3,546
Remove print statements in datasets
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The CI failures are unrelated to the changes." ]
1,641,565,824,000
1,641,578,956,000
1,641,578,955,000
CONTRIBUTOR
null
This is the second time I'm removing print statements from our datasets, so I've added a test to avoid these issues in the future.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3546/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3546/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3546", "html_url": "https://github.com/huggingface/datasets/pull/3546", "diff_url": "https://github.com/huggingface/datasets/pull/3546.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3546.patch", "merged_at": 1641578955000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3545
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3545/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3545/comments
https://api.github.com/repos/huggingface/datasets/issues/3545/events
https://github.com/huggingface/datasets/pull/3545
1,096,189,889
PR_kwDODunzps4wpziv
3,545
fix: 🐛 pass token when retrieving the split names
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Currently, it does not work with https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0/blob/main/common_voice_7_0.py#L146 (which was the goal), because `dl_manager.download_config.use_auth_token` is ignored, and the authentication is required to be use `huggingface-cli login`.\r\nIn my use case (dataset viewer), I'd prefer to use a specific \"User Token Access\", with only the \"read\" role (https://huggingface.co/settings/token).\r\n\r\nSee https://github.com/huggingface/datasets-preview-backend/issues/74#issuecomment-1007316853 for the context", "> Simply passing download_config is ok :)\r\n\r\nhmm, I prefer only passing use_auth_token. But the question is more: is it correct, in the (convoluted) case if `download_config.use_auth_token` exists and is different from `use_auth_token`? Which one should be used?", "If both are passed, `use_auth_token` should have the priority (more specific parameters have the higher priority)" ]
1,641,551,362,000
1,641,811,907,000
1,641,811,906,000
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3545/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3545/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3545", "html_url": "https://github.com/huggingface/datasets/pull/3545", "diff_url": "https://github.com/huggingface/datasets/pull/3545.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3545.patch", "merged_at": 1641811906000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3544
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3544/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3544/comments
https://api.github.com/repos/huggingface/datasets/issues/3544/events
https://github.com/huggingface/datasets/issues/3544
1,095,784,681
I_kwDODunzps5BUFjp
3,544
Ability to split a dataset in multiple files.
{ "login": "Dref360", "id": 8976546, "node_id": "MDQ6VXNlcjg5NzY1NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dref360", "html_url": "https://github.com/Dref360", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "organizations_url": "https://api.github.com/users/Dref360/orgs", "repos_url": "https://api.github.com/users/Dref360/repos", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "received_events_url": "https://api.github.com/users/Dref360/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,641,510,145,000
1,641,510,145,000
null
CONTRIBUTOR
null
Hello, **Is your feature request related to a problem? Please describe.** My use case is that I have one writer that adds columns and multiple workers reading the same `Dataset`. Each worker should have access to the columns added by the writer when they reload the dataset. I understand that we shouldn't overwrite an arrow file, as this could cause a segfault and so on. Before 1.16, I was able to overwrite the dataset and that would work most of the time, with some retries. **Describe the solution you'd like** I was thinking that if we could append to `Dataset._data_files`, the workers would get the new columns when they reload the Dataset. **Describe alternatives you've considered** I currently need to 1. Save multiple "versions" of the dataset and load the latest. 2. Try working with cache files to get the latest columns. **Additional context** I think this would be a great addition to HFDataset, as Parquet supports multi-file input out of the box! I can make a PR myself with some pointers as needed :)
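A hedged sketch of the kind of pattern this request is after, expressed with today's API (the paths are hypothetical, and it assumes `concatenate_datasets(..., axis=1)` from recent releases plus equal row counts):

```python
from datasets import Dataset, concatenate_datasets, load_from_disk

# Writer: persist only the newly added columns as their own dataset
new_cols = Dataset.from_dict({"extra_feature": [0.1, 0.2, 0.3]})
new_cols.save_to_disk("my_dataset_extra")

# Worker: reload the base rows and attach the new columns side by side
base = load_from_disk("my_dataset_base")    # must have the same number of rows
extra = load_from_disk("my_dataset_extra")
combined = concatenate_datasets([base, extra], axis=1)
```

Native support for appending to `Dataset._data_files` would make this version bookkeeping unnecessary.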
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3544/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3544/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3543
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3543/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3543/comments
https://api.github.com/repos/huggingface/datasets/issues/3543/events
https://github.com/huggingface/datasets/issues/3543
1,095,226,438
I_kwDODunzps5BR9RG
3,543
Allow loading community metrics from the hub, just like datasets
{ "login": "eladsegal", "id": 13485709, "node_id": "MDQ6VXNlcjEzNDg1NzA5", "avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eladsegal", "html_url": "https://github.com/eladsegal", "followers_url": "https://api.github.com/users/eladsegal/followers", "following_url": "https://api.github.com/users/eladsegal/following{/other_user}", "gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}", "starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions", "organizations_url": "https://api.github.com/users/eladsegal/orgs", "repos_url": "https://api.github.com/users/eladsegal/repos", "events_url": "https://api.github.com/users/eladsegal/events{/privacy}", "received_events_url": "https://api.github.com/users/eladsegal/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
open
false
null
[]
null
[ "Hi ! Thanks for your message :) This is a great idea indeed. We haven't started working on this yet though. For now I guess you can host your metric on the Hub (either with your model or your dataset) and use `hf_hub_download` to download it (docs [here](https://github.com/huggingface/huggingface_hub/blob/main/docs/hub/how-to-downstream.md#cached_download))", "This is a great solution in the meantime, thanks!", "Here's the code I used, in case it can be of help to someone else:\r\n```python\r\nimport os, shutil\r\nfrom huggingface_hub import hf_hub_download\r\ndef download_metric(repo_id, file_path):\r\n # repo_id: for models \"username/model_name\", for datasets \"datasets/username/model_name\"\r\n local_metric_path = hf_hub_download(repo_id=repo_id, filename=file_path)\r\n updated_local_metric_path = (os.path.dirname(local_metric_path) + os.path.basename(local_metric_path).replace(\".\", \"_\") + \".py\")\r\n shutil.copy(local_metric_path, updated_local_metric_path)\r\n return updated_local_metric_path\r\n\r\nmetric = load_metric(download_metric(REPO_ID, FILE_PATH))\r\n```" ]
1,641,468,386,000
1,641,760,093,000
null
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** Currently, I can load a metric I implemented by providing the local path to the file in `load_metric`. However, there is no option to do this with a metric uploaded to the Hub. This means that if I want to allow other users to use it, they must download it first, which makes the usage less smooth. **Describe the solution you'd like** Load metrics from the Hub just like datasets are loaded. In order not to break stuff, the convention can be to put the metric file in a "metrics" folder on the Hub.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3543/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3543/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3542
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3542/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3542/comments
https://api.github.com/repos/huggingface/datasets/issues/3542/events
https://github.com/huggingface/datasets/pull/3542
1,095,088,485
PR_kwDODunzps4wmPIP
3,542
Update the CC-100 dataset card
{ "login": "aajanki", "id": 353043, "node_id": "MDQ6VXNlcjM1MzA0Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/353043?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aajanki", "html_url": "https://github.com/aajanki", "followers_url": "https://api.github.com/users/aajanki/followers", "following_url": "https://api.github.com/users/aajanki/following{/other_user}", "gists_url": "https://api.github.com/users/aajanki/gists{/gist_id}", "starred_url": "https://api.github.com/users/aajanki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aajanki/subscriptions", "organizations_url": "https://api.github.com/users/aajanki/orgs", "repos_url": "https://api.github.com/users/aajanki/repos", "events_url": "https://api.github.com/users/aajanki/events{/privacy}", "received_events_url": "https://api.github.com/users/aajanki/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,641,458,118,000
1,641,494,264,000
1,641,494,264,000
CONTRIBUTOR
null
* summary from the dataset homepage * more details about the data structure * this dataset does not contain annotations
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3542/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3542/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3542", "html_url": "https://github.com/huggingface/datasets/pull/3542", "diff_url": "https://github.com/huggingface/datasets/pull/3542.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3542.patch", "merged_at": 1641494264000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3541
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3541/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3541/comments
https://api.github.com/repos/huggingface/datasets/issues/3541/events
https://github.com/huggingface/datasets/issues/3541
1,095,033,828
I_kwDODunzps5BROPk
3,541
Support 7-zip compressed data files
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "This should also resolve: https://github.com/huggingface/datasets/issues/3185." ]
1,641,453,063,000
1,642,600,878,000
null
MEMBER
null
**Is your feature request related to a problem? Please describe.** We should support 7-zip compressed data files: - in `extract` - in `iter_archive`, both in streaming and non-streaming modes.
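Until native support lands, a minimal workaround sketch is to extract the archive with the third-party `py7zr` package and load the extracted files as usual; the archive and file names here are hypothetical:

```python
import py7zr
from datasets import load_dataset

# Extract the 7-zip archive manually first
with py7zr.SevenZipFile("data.7z", mode="r") as archive:
    archive.extractall(path="extracted")

ds = load_dataset("json", data_files="extracted/train.jsonl")
```

This only covers the non-streaming case; streaming would need `iter_archive`-style support inside the library itself.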
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3541/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3541/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3540
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3540/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3540/comments
https://api.github.com/repos/huggingface/datasets/issues/3540/events
https://github.com/huggingface/datasets/issues/3540
1,094,900,336
I_kwDODunzps5BQtpw
3,540
How to convert torch.utils.data.Dataset to datasets.arrow_dataset.Dataset?
{ "login": "CindyTing", "id": 35062414, "node_id": "MDQ6VXNlcjM1MDYyNDE0", "avatar_url": "https://avatars.githubusercontent.com/u/35062414?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CindyTing", "html_url": "https://github.com/CindyTing", "followers_url": "https://api.github.com/users/CindyTing/followers", "following_url": "https://api.github.com/users/CindyTing/following{/other_user}", "gists_url": "https://api.github.com/users/CindyTing/gists{/gist_id}", "starred_url": "https://api.github.com/users/CindyTing/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CindyTing/subscriptions", "organizations_url": "https://api.github.com/users/CindyTing/orgs", "repos_url": "https://api.github.com/users/CindyTing/repos", "events_url": "https://api.github.com/users/CindyTing/events{/privacy}", "received_events_url": "https://api.github.com/users/CindyTing/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,641,435,222,000
1,641,435,459,000
null
NONE
null
Hi, I use torch.utils.data.Dataset to define my own data, but I need to use the 'map' function of datasets.arrow_dataset.Dataset later, so I hope to convert a torch.utils.data.Dataset to a datasets.arrow_dataset.Dataset. Here is an example. ``` from torch.utils.data import Dataset from datasets.arrow_dataset import Dataset as HFDataset class ADataset(Dataset): def __init__(self, data): super().__init__() self.data = data def __getitem__(self, index): return self.data[index] def __len__(self): return len(self.data) class MDataset(): def __init__(self, tokenizer: AutoTokenizer, data_args, training_args): self.train_dataset = ADataset(data_args) self.tokenizer = tokenizer self.data_args = data_args self.train_dataset = self.train_dataset.map( self.process_function, batched=True, remove_columns=column_names, load_from_cache_file=True, desc="Running tokenizer on train dataset", ) def process_function(self, examples): sentences = [" ".join(sample[0][3]) for sample in examples] tokenized = self.tokenizer( sentences, max_length=self.max_seq_len, padding=self.padding, truncation=True) ``` But it raises an error: AttributeError: 'ADataset' object has no attribute 'map'. So, how can I convert a torch.utils.data.Dataset to a datasets.arrow_dataset.Dataset? Thanks in advance!
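For reference, one possible bridge (a sketch, assuming each item of the torch-style dataset is a dict of column names to values): materialize the items into columns and build an Arrow-backed dataset with `Dataset.from_dict`, which then supports `map`:

```python
from datasets import Dataset as HFDataset

# Hypothetical torch-style items; in practice, replace with [ds[i] for i in range(len(ds))]
torch_items = [{"text": "hello"}, {"text": "world"}]

# Turn the row-oriented items into column-oriented lists
columns = {key: [item[key] for item in torch_items] for key in torch_items[0]}
hf_dataset = HFDataset.from_dict(columns)

hf_dataset = hf_dataset.map(lambda ex: {"n_chars": len(ex["text"])})
```

Note this copies the data into Arrow, so it is only practical when the torch dataset fits through the conversion.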
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3540/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3540/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3539
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3539/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3539/comments
https://api.github.com/repos/huggingface/datasets/issues/3539/events
https://github.com/huggingface/datasets/pull/3539
1,094,813,242
PR_kwDODunzps4wlXU4
3,539
Research wording for nc licenses
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[ { "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false } ]
null
[ "The CI failure is about some missing tags or sections in the dataset cards, and is unrelated to the part about non commercial use of this PR. Merging" ]
1,641,423,698,000
1,641,495,500,000
1,641,495,499,000
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3539/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3539/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3539", "html_url": "https://github.com/huggingface/datasets/pull/3539", "diff_url": "https://github.com/huggingface/datasets/pull/3539.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3539.patch", "merged_at": 1641495499000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3538
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3538/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3538/comments
https://api.github.com/repos/huggingface/datasets/issues/3538/events
https://github.com/huggingface/datasets/pull/3538
1,094,756,755
PR_kwDODunzps4wlLmD
3,538
Readme usage update
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[ { "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false } ]
null
[]
1,641,417,988,000
1,641,425,665,000
1,641,425,055,000
CONTRIBUTOR
null
I'm noticing that the recent commit throws a lot of errors in the automatic checks. It looks to me like those errors were already there (metadata issues) and are unrelated to what I've just changed, but they're worth another look to make sure.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3538/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3538/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3538", "html_url": "https://github.com/huggingface/datasets/pull/3538", "diff_url": "https://github.com/huggingface/datasets/pull/3538.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3538.patch", "merged_at": 1641425055000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3537
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3537/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3537/comments
https://api.github.com/repos/huggingface/datasets/issues/3537/events
https://github.com/huggingface/datasets/pull/3537
1,094,738,734
PR_kwDODunzps4wlH1d
3,537
added PII statements and license links to data cards
{ "login": "mcmillanmajora", "id": 26722925, "node_id": "MDQ6VXNlcjI2NzIyOTI1", "avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mcmillanmajora", "html_url": "https://github.com/mcmillanmajora", "followers_url": "https://api.github.com/users/mcmillanmajora/followers", "following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}", "gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}", "starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions", "organizations_url": "https://api.github.com/users/mcmillanmajora/orgs", "repos_url": "https://api.github.com/users/mcmillanmajora/repos", "events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}", "received_events_url": "https://api.github.com/users/mcmillanmajora/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,641,416,361,000
1,641,420,157,000
1,641,420,157,000
CONTRIBUTOR
null
Updates for the following data cards: multilingual_librispeech, openslr, speech_commands, superb, timit_asr, vctk
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3537/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3537/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3537", "html_url": "https://github.com/huggingface/datasets/pull/3537", "diff_url": "https://github.com/huggingface/datasets/pull/3537.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3537.patch", "merged_at": 1641420157000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3536
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3536/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3536/comments
https://api.github.com/repos/huggingface/datasets/issues/3536/events
https://github.com/huggingface/datasets/pull/3536
1,094,645,771
PR_kwDODunzps4wk0Yb
3,536
update `pretty_name` for all datasets
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Pushed the lastest changes!" ]
1,641,408,305,000
1,642,028,386,000
1,642,028,385,000
CONTRIBUTOR
null
This PR updates `pretty_name` for all datasets. The previous PR #3498 did this only for the first 200 datasets.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3536/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3536/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3536", "html_url": "https://github.com/huggingface/datasets/pull/3536", "diff_url": "https://github.com/huggingface/datasets/pull/3536.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3536.patch", "merged_at": 1642028385000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3535
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3535/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3535/comments
https://api.github.com/repos/huggingface/datasets/issues/3535/events
https://github.com/huggingface/datasets/pull/3535
1,094,633,214
PR_kwDODunzps4wkxv0
3,535
Add SVHN dataset
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,641,407,349,000
1,641,996,875,000
1,641,996,875,000
CONTRIBUTOR
null
Add the SVHN dataset. Additional notes: * compared to the TFDS implementation, additionally exposes the "full numbers" config * adds streaming support for `os.path.splitext` and `scipy.io.loadmat` * adds `h5py` to the requirements list for the dummy data test
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3535/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3535/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3535", "html_url": "https://github.com/huggingface/datasets/pull/3535", "diff_url": "https://github.com/huggingface/datasets/pull/3535.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3535.patch", "merged_at": 1641996875000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3534
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3534/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3534/comments
https://api.github.com/repos/huggingface/datasets/issues/3534/events
https://github.com/huggingface/datasets/pull/3534
1,094,352,449
PR_kwDODunzps4wj3LE
3,534
Update wiki_dpr README.md
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,641,389,384,000
1,645,105,556,000
1,641,392,211,000
MEMBER
null
Some info for wiki_dpr was missing, as noted in https://github.com/huggingface/datasets/issues/3510. I added it and updated the tags and the examples. Close #3510.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3534/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3534/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3534", "html_url": "https://github.com/huggingface/datasets/pull/3534", "diff_url": "https://github.com/huggingface/datasets/pull/3534.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3534.patch", "merged_at": 1641392211000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3533
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3533/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3533/comments
https://api.github.com/repos/huggingface/datasets/issues/3533/events
https://github.com/huggingface/datasets/issues/3533
1,094,156,147
I_kwDODunzps5BN39z
3,533
Task search function on hub not working correctly
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }, { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "known issue due to https://github.com/huggingface/datasets/pull/2362 (and [internal](https://github.com/huggingface/moon-landing/issues/946)) , will be solved soon" ]
1,641,375,390,000
1,641,376,988,000
null
MEMBER
null
When I look at all datasets in the `speech-processing` category, *i.e.* https://huggingface.co/datasets?task_categories=task_categories:speech-processing&sort=downloads , the following dataset doesn't show up for some reason: - https://huggingface.co/datasets/speech_commands even though its task tags seem correct: https://raw.githubusercontent.com/huggingface/datasets/master/datasets/speech_commands/README.md
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3533/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3533/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3532
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3532/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3532/comments
https://api.github.com/repos/huggingface/datasets/issues/3532/events
https://github.com/huggingface/datasets/pull/3532
1,094,035,066
PR_kwDODunzps4wi1ft
3,532
Give clearer instructions to add the YAML tags
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "this is great, maybe just put all of it in one line?\r\n\r\n> TODO: Add YAML tags here. Copy-paste the tags obtained with the online tagging app: https://huggingface.co/spaces/huggingface/datasets-tagging" ]
1,641,365,272,000
1,642,434,877,000
1,642,434,876,000
MEMBER
null
Fix #3531. CC: @julien-c @VictorSanh
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3532/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3532/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3532", "html_url": "https://github.com/huggingface/datasets/pull/3532", "diff_url": "https://github.com/huggingface/datasets/pull/3532.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3532.patch", "merged_at": 1642434876000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3531
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3531/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3531/comments
https://api.github.com/repos/huggingface/datasets/issues/3531/events
https://github.com/huggingface/datasets/issues/3531
1,094,033,280
I_kwDODunzps5BNZ-A
3,531
Give clearer instructions to add the YAML tags
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,641,365,060,000
1,642,434,876,000
1,642,434,876,000
MEMBER
null
## Describe the bug As reported by @julien-c, many community datasets contain the line `YAML tags:` at the top of the YAML section in the header of the README file. See e.g.: https://huggingface.co/datasets/bigscience/P3/commit/a03bea08cf4d58f268b469593069af6aeb15de32 Maybe we should give clearer instructions/hints in the README template.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3531/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3531/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3530
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3530/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3530/comments
https://api.github.com/repos/huggingface/datasets/issues/3530/events
https://github.com/huggingface/datasets/pull/3530
1,093,894,732
PR_kwDODunzps4wiZCw
3,530
Update README.md
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[ { "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false } ]
null
[]
1,641,346,327,000
1,641,387,051,000
1,641,387,050,000
CONTRIBUTOR
null
Removing reference to "Common Voice" in Personal and Sensitive Information section. Adding link to license. Correcting license type in metadata.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3530/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3530/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3530", "html_url": "https://github.com/huggingface/datasets/pull/3530", "diff_url": "https://github.com/huggingface/datasets/pull/3530.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3530.patch", "merged_at": 1641387050000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3529
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3529/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3529/comments
https://api.github.com/repos/huggingface/datasets/issues/3529/events
https://github.com/huggingface/datasets/pull/3529
1,093,846,356
PR_kwDODunzps4wiPA9
3,529
Update README.md
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[ { "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false } ]
null
[]
1,641,340,367,000
1,641,387,015,000
1,641,387,014,000
CONTRIBUTOR
null
Updating licensing information & personal and sensitive information.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3529/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3529/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3529", "html_url": "https://github.com/huggingface/datasets/pull/3529", "diff_url": "https://github.com/huggingface/datasets/pull/3529.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3529.patch", "merged_at": 1641387014000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3528
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3528/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3528/comments
https://api.github.com/repos/huggingface/datasets/issues/3528/events
https://github.com/huggingface/datasets/pull/3528
1,093,844,616
PR_kwDODunzps4wiOqH
3,528
Update README.md
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[ { "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false } ]
null
[]
1,641,340,091,000
1,641,386,981,000
1,641,386,980,000
CONTRIBUTOR
null
Updating license with appropriate capitalization & a link. Updating Personal and Sensitive Information to address PII concerns.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3528/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3528/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3528", "html_url": "https://github.com/huggingface/datasets/pull/3528", "diff_url": "https://github.com/huggingface/datasets/pull/3528.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3528.patch", "merged_at": 1641386980000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3527
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3527/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3527/comments
https://api.github.com/repos/huggingface/datasets/issues/3527/events
https://github.com/huggingface/datasets/pull/3527
1,093,840,707
PR_kwDODunzps4wiN1w
3,527
Update README.md
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[ { "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false } ]
null
[]
1,641,339,581,000
1,641,342,230,000
1,641,342,230,000
CONTRIBUTOR
null
Adding licensing information.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3527/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3527/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3527", "html_url": "https://github.com/huggingface/datasets/pull/3527", "diff_url": "https://github.com/huggingface/datasets/pull/3527.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3527.patch", "merged_at": 1641342230000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3526
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3526/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3526/comments
https://api.github.com/repos/huggingface/datasets/issues/3526/events
https://github.com/huggingface/datasets/pull/3526
1,093,833,446
PR_kwDODunzps4wiMaQ
3,526
Update README.md
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[ { "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false } ]
null
[]
1,641,338,723,000
1,641,339,008,000
null
CONTRIBUTOR
null
Not entirely sure, following the links here, but it seems the relevant license is at https://github.com/soskek/bookcorpus/blob/master/LICENSE
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3526/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3526/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3526", "html_url": "https://github.com/huggingface/datasets/pull/3526", "diff_url": "https://github.com/huggingface/datasets/pull/3526.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3526.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3525
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3525/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3525/comments
https://api.github.com/repos/huggingface/datasets/issues/3525/events
https://github.com/huggingface/datasets/pull/3525
1,093,831,268
PR_kwDODunzps4wiL8p
3,525
Adding license information for Openbookcorpus
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[ { "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false } ]
null
[ "The MIT license seems to be for the crawling code, no ? Then maybe we can also redirect users to the [terms of smashwords.com](https://www.smashwords.com/about/tos) regarding copyrights, in particular the paragraph 10 for end-users. In particular it seems that end users can download and use the content \"for their personal enjoyment in any reasonable non-commercial manner in compliance with copyright law\" and the smashwords end-users agreement.\r\n\r\nIt should be the same for https://github.com/huggingface/datasets/pull/3526 as well" ]
1,641,338,436,000
1,646,989,693,000
null
CONTRIBUTOR
null
Not entirely sure, following the links here, but it seems the relevant license is at https://github.com/soskek/bookcorpus/blob/master/LICENSE
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3525/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3525/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3525", "html_url": "https://github.com/huggingface/datasets/pull/3525", "diff_url": "https://github.com/huggingface/datasets/pull/3525.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3525.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3524
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3524/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3524/comments
https://api.github.com/repos/huggingface/datasets/issues/3524/events
https://github.com/huggingface/datasets/pull/3524
1,093,826,723
PR_kwDODunzps4wiK_v
3,524
Adding link to license.
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[ { "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false } ]
null
[]
1,641,337,908,000
1,641,385,898,000
1,641,385,897,000
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3524/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3524/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3524", "html_url": "https://github.com/huggingface/datasets/pull/3524", "diff_url": "https://github.com/huggingface/datasets/pull/3524.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3524.patch", "merged_at": 1641385897000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3523
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3523/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3523/comments
https://api.github.com/repos/huggingface/datasets/issues/3523/events
https://github.com/huggingface/datasets/pull/3523
1,093,819,227
PR_kwDODunzps4wiJc2
3,523
Added links to licensing and PII message in vctk dataset
{ "login": "mcmillanmajora", "id": 26722925, "node_id": "MDQ6VXNlcjI2NzIyOTI1", "avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mcmillanmajora", "html_url": "https://github.com/mcmillanmajora", "followers_url": "https://api.github.com/users/mcmillanmajora/followers", "following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}", "gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}", "starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions", "organizations_url": "https://api.github.com/users/mcmillanmajora/orgs", "repos_url": "https://api.github.com/users/mcmillanmajora/repos", "events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}", "received_events_url": "https://api.github.com/users/mcmillanmajora/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,641,337,018,000
1,641,497,630,000
1,641,497,630,000
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3523/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3523/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3523", "html_url": "https://github.com/huggingface/datasets/pull/3523", "diff_url": "https://github.com/huggingface/datasets/pull/3523.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3523.patch", "merged_at": 1641497630000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3522
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3522/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3522/comments
https://api.github.com/repos/huggingface/datasets/issues/3522/events
https://github.com/huggingface/datasets/issues/3522
1,093,807,586
I_kwDODunzps5BMi3i
3,522
wmt19 is broken (zh-en)
{ "login": "AjayP13", "id": 5404177, "node_id": "MDQ6VXNlcjU0MDQxNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/5404177?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AjayP13", "html_url": "https://github.com/AjayP13", "followers_url": "https://api.github.com/users/AjayP13/followers", "following_url": "https://api.github.com/users/AjayP13/following{/other_user}", "gists_url": "https://api.github.com/users/AjayP13/gists{/gist_id}", "starred_url": "https://api.github.com/users/AjayP13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AjayP13/subscriptions", "organizations_url": "https://api.github.com/users/AjayP13/orgs", "repos_url": "https://api.github.com/users/AjayP13/repos", "events_url": "https://api.github.com/users/AjayP13/events{/privacy}", "received_events_url": "https://api.github.com/users/AjayP13/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[]
null
[]
1,641,335,625,000
1,642,425,415,000
null
NONE
null
## Describe the bug

Downloading the `wmt19` dataset with the `zh-en` config fails with a connection error.

## Steps to reproduce the bug

```python
from datasets import load_dataset
dataset = load_dataset("wmt19", 'zh-en')
```

## Expected results

The dataset should download.

## Actual results

`ConnectionError: Couldn't reach ftp://cwmt-wmt:cwmt-wmt@datasets.nju.edu.cn/parallel/casia2015.zip`

## Environment info

- `datasets` version: 1.15.1
- Platform: Linux
- Python version: 3.8
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3522/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3522/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3521
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3521/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3521/comments
https://api.github.com/repos/huggingface/datasets/issues/3521/events
https://github.com/huggingface/datasets/pull/3521
1,093,797,947
PR_kwDODunzps4wiFCs
3,521
Vivos license update
{ "login": "mcmillanmajora", "id": 26722925, "node_id": "MDQ6VXNlcjI2NzIyOTI1", "avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mcmillanmajora", "html_url": "https://github.com/mcmillanmajora", "followers_url": "https://api.github.com/users/mcmillanmajora/followers", "following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}", "gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}", "starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions", "organizations_url": "https://api.github.com/users/mcmillanmajora/orgs", "repos_url": "https://api.github.com/users/mcmillanmajora/repos", "events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}", "received_events_url": "https://api.github.com/users/mcmillanmajora/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,641,334,667,000
1,641,334,696,000
1,641,334,696,000
CONTRIBUTOR
null
Updated the license information with the link to the license text
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3521/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3521/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3521", "html_url": "https://github.com/huggingface/datasets/pull/3521", "diff_url": "https://github.com/huggingface/datasets/pull/3521.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3521.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3520
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3520/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3520/comments
https://api.github.com/repos/huggingface/datasets/issues/3520/events
https://github.com/huggingface/datasets/pull/3520
1,093,747,753
PR_kwDODunzps4wh6oD
3,520
Audio datacard update - first pass
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false }
[ { "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "site_admin": false } ]
null
[ "I'm not sure that we want to change the tags at the top of the cards by hand. Those are used to create the tags in the hub. Although looking at all the tags now, we might want to normalize the current tags again (hyphens or no, \".0\" or no). Maybe we could add a binary tag for public domain or not?", "> \r\n\r\nThat's a good point, I didn't realize these were auto-populated.\r\nAt the same time, some of them are wrong -- how/where are they auto-populated? Seems like we should fix it at that source for the future.\r\nIn the mean time, I see that \"cc0-1.0\" is the desired tag for public domain, so I will change that for now." ]
1,641,329,905,000
1,641,385,821,000
1,641,385,820,000
CONTRIBUTOR
null
Filling out data card "Personal and Sensitive Information" for speech datasets to note PII concerns
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3520/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3520/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3520", "html_url": "https://github.com/huggingface/datasets/pull/3520", "diff_url": "https://github.com/huggingface/datasets/pull/3520.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3520.patch", "merged_at": 1641385820000 }
true