Dataset Preview
Viewer
The full dataset viewer is not available. Only showing a preview of the rows.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    DatasetGenerationError
Message:      An error occurred while generating the dataset
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2261, in cast_table_to_schema
                  arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2261, in <listcomp>
                  arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1802, in wrapper
                  return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1802, in <listcomp>
                  return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2116, in cast_array_to_feature
                  return array_cast(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1804, in wrapper
                  return func(array, *args, **kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1964, in array_cast
                  raise TypeError(f"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)}")
              TypeError: Couldn't cast array of type struct<url: string, html_url: string, labels_url: string, id: int64, node_id: string, number: int64, title: string, description: string, creator: struct<login: string, id: int64, node_id: string, avatar_url: string, gravatar_id: string, url: string, html_url: string, followers_url: string, following_url: string, gists_url: string, starred_url: string, subscriptions_url: string, organizations_url: string, repos_url: string, events_url: string, received_events_url: string, type: string, site_admin: bool>, open_issues: int64, closed_issues: int64, state: string, created_at: int64, updated_at: int64, due_on: int64, closed_at: int64> to null
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1321, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 935, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
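Why this happens: every `milestone` value in the preview rows below is null, so the feature type for that column was likely inferred as `null` (see the schema that follows); a later batch of rows then contained full milestone structs, which cannot be cast to `null`. A minimal sketch of the same refusal, assuming the `datasets` internals named in the traceback; the struct fields are a shortened stand-in for the milestone struct in the error message:

```python
# Minimal sketch of the cast refusal above, assuming the datasets internals
# named in the traceback; the struct fields are a shortened stand-in for
# the milestone struct in the error message.
import pyarrow as pa
from datasets.table import array_cast

milestones = pa.array([{"id": 1, "title": "v1.0"}])  # hypothetical non-null milestone batch
try:
    array_cast(milestones, pa.null())  # same code path as table.py:1964 above
except TypeError as err:
    print(err)  # Couldn't cast array of type struct<id: int64, title: string> to null
```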


Column                      Type
url                         string
repository_url              string
labels_url                  string
comments_url                string
events_url                  string
html_url                    string
id                          int64
node_id                     string
number                      int64
title                       string
user                        dict
labels                      list
state                       string
locked                      bool
assignee                    dict
assignees                   list
milestone                   null
comments                    sequence
created_at                  int64
updated_at                  int64
closed_at                   int64
author_association          string
active_lock_reason          null
body                        string
reactions                   dict
timeline_url                string
performed_via_github_app    null
state_reason                null
draft                       bool
pull_request                dict
is_pull_request             bool
https://api.github.com/repos/huggingface/datasets/issues/4989
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4989/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4989/comments
https://api.github.com/repos/huggingface/datasets/issues/4989/events
https://github.com/huggingface/datasets/issues/4989
1,376,832,233
I_kwDODunzps5SEMrp
4,989
Running add_column() seems to corrupt existing sequence-type column info
{ "login": "derek-rocheleau", "id": 93728165, "node_id": "U_kgDOBZYtpQ", "avatar_url": "https://avatars.githubusercontent.com/u/93728165?v=4", "gravatar_id": "", "url": "https://api.github.com/users/derek-rocheleau", "html_url": "https://github.com/derek-rocheleau", "followers_url": "https://api.github.com/users/derek-rocheleau/followers", "following_url": "https://api.github.com/users/derek-rocheleau/following{/other_user}", "gists_url": "https://api.github.com/users/derek-rocheleau/gists{/gist_id}", "starred_url": "https://api.github.com/users/derek-rocheleau/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/derek-rocheleau/subscriptions", "organizations_url": "https://api.github.com/users/derek-rocheleau/orgs", "repos_url": "https://api.github.com/users/derek-rocheleau/repos", "events_url": "https://api.github.com/users/derek-rocheleau/events{/privacy}", "received_events_url": "https://api.github.com/users/derek-rocheleau/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,663,436,525,000
1,663,436,525,000
null
NONE
null
I have a dataset that contains a column ("foo") that is a sequence type of length 4. So when I run .to_pandas() on it, the resulting dataframe correctly contains 4 columns - foo_0, foo_1, foo_2, foo_3. So the 1st row of the dataframe might look like:

    ds = load_dataset(...)
    df = ds.to_pandas()

    df:
    foo_0 | foo_1 | foo_2 | foo_3
    0.0   | 1.0   | 2.0   | 3.0

If I run .add_column("new_col", data) on the dataset, and then .to_pandas() on the resulting new dataset, the resulting dataframe contains only 2 columns - foo, new_col. The values in column foo are lists of length 4, the 4 elements that should have been split into separate columns. Dataframe 1st row would be:

    ds = load_dataset(...)
    new_ds = ds.add_column("new_col", data)
    df = new_ds.to_pandas()

    df:
    foo                  | new_col
    [0.0, 1.0, 2.0, 3.0] | new_val

I've explored the 2 datasets in a debugger and haven't noticed any changes to any attributes related to the foo column, but I can't determine why the dataframes are so different.
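A hedged reproduction sketch of the report above; the length-4 sequence column "foo" follows the description, while building the dataset via `Dataset.from_dict` and the contents of `new_col` are assumptions:

```python
# Hedged reproduction sketch; constructing the data via Dataset.from_dict
# and the contents of new_col are assumptions, not the reporter's code.
from datasets import Dataset, Features, Sequence, Value

features = Features({"foo": Sequence(Value("float64"), length=4)})
ds = Dataset.from_dict({"foo": [[0.0, 1.0, 2.0, 3.0]]}, features=features)
print(ds.to_pandas().columns.tolist())      # per the report: foo_0 ... foo_3

new_ds = ds.add_column("new_col", ["new_val"])
print(new_ds.to_pandas().columns.tolist())  # per the report: just foo, new_col
```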
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4989/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4989/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4988
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4988/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4988/comments
https://api.github.com/repos/huggingface/datasets/issues/4988/events
https://github.com/huggingface/datasets/issues/4988
1,376,096,584
I_kwDODunzps5SBZFI
4,988
Add `IterableDataset.from_generator` to the API
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
open
false
{ "login": "hamid-vakilzadeh", "id": 56002455, "node_id": "MDQ6VXNlcjU2MDAyNDU1", "avatar_url": "https://avatars.githubusercontent.com/u/56002455?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hamid-vakilzadeh", "html_url": "https://github.com/hamid-vakilzadeh", "followers_url": "https://api.github.com/users/hamid-vakilzadeh/followers", "following_url": "https://api.github.com/users/hamid-vakilzadeh/following{/other_user}", "gists_url": "https://api.github.com/users/hamid-vakilzadeh/gists{/gist_id}", "starred_url": "https://api.github.com/users/hamid-vakilzadeh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hamid-vakilzadeh/subscriptions", "organizations_url": "https://api.github.com/users/hamid-vakilzadeh/orgs", "repos_url": "https://api.github.com/users/hamid-vakilzadeh/repos", "events_url": "https://api.github.com/users/hamid-vakilzadeh/events{/privacy}", "received_events_url": "https://api.github.com/users/hamid-vakilzadeh/received_events", "type": "User", "site_admin": false }
[ { "login": "hamid-vakilzadeh", "id": 56002455, "node_id": "MDQ6VXNlcjU2MDAyNDU1", "avatar_url": "https://avatars.githubusercontent.com/u/56002455?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hamid-vakilzadeh", "html_url": "https://github.com/hamid-vakilzadeh", "followers_url": "https://api.github.com/users/hamid-vakilzadeh/followers", "following_url": "https://api.github.com/users/hamid-vakilzadeh/following{/other_user}", "gists_url": "https://api.github.com/users/hamid-vakilzadeh/gists{/gist_id}", "starred_url": "https://api.github.com/users/hamid-vakilzadeh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hamid-vakilzadeh/subscriptions", "organizations_url": "https://api.github.com/users/hamid-vakilzadeh/orgs", "repos_url": "https://api.github.com/users/hamid-vakilzadeh/repos", "events_url": "https://api.github.com/users/hamid-vakilzadeh/events{/privacy}", "received_events_url": "https://api.github.com/users/hamid-vakilzadeh/received_events", "type": "User", "site_admin": false } ]
null
[ "#take" ]
1,663,341,581,000
1,663,434,419,000
null
CONTRIBUTOR
null
We've just added `Dataset.from_generator` to the API. It would also be cool to add `IterableDataset.from_generator` to support creating an iterable dataset from a generator. cc @lhoestq
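A hedged sketch of how the requested constructor might be used, mirroring the just-added `Dataset.from_generator`; the exact signature is an assumption, since this issue only proposes the feature:

```python
# Hedged usage sketch for the proposed IterableDataset.from_generator,
# mirroring Dataset.from_generator; the signature is an assumption.
from datasets import IterableDataset

def gen():
    for i in range(3):
        yield {"id": i, "text": f"example {i}"}

ids = IterableDataset.from_generator(gen)  # proposed API
for example in ids:
    print(example)
```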
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4988/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4988/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4987
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4987/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4987/comments
https://api.github.com/repos/huggingface/datasets/issues/4987/events
https://github.com/huggingface/datasets/pull/4987
1,376,006,477
PR_kwDODunzps4_GlIu
4,987
Embed image/audio data in dl_and_prepare parquet
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,663,337,367,000
1,663,345,487,000
1,663,345,355,000
MEMBER
null
Embed the bytes of the image or audio files in the Parquet files directly, instead of having a "path" that points to a local file. Indeed Parquet files are often used to share data or to be used by workers that may not have access to the local files.
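A hedged illustration of the change described: an image or audio cell is stored as a `{bytes, path}` struct, and embedding fills `bytes` from the local file so the Parquet output is self-contained. The file path is illustrative and this is not the PR's actual code:

```python
# Hedged illustration: fill the `bytes` field of a {bytes, path} media cell
# from the local file so the Parquet file carries the data itself.
# The path is illustrative; this is not the PR's implementation.
import os

cell = {"bytes": None, "path": "/local/images/0001.png"}
with open(cell["path"], "rb") as f:
    embedded = {"bytes": f.read(), "path": os.path.basename(cell["path"])}
```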
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4987/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4987/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4987", "html_url": "https://github.com/huggingface/datasets/pull/4987", "diff_url": "https://github.com/huggingface/datasets/pull/4987.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4987.patch", "merged_at": 1663345355000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4986
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4986/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4986/comments
https://api.github.com/repos/huggingface/datasets/issues/4986/events
https://github.com/huggingface/datasets/pull/4986
1,375,895,035
PR_kwDODunzps4_GNSd
4,986
[doc] Fix broken snippet that had too many quotes
{ "login": "tomaarsen", "id": 37621491, "node_id": "MDQ6VXNlcjM3NjIxNDkx", "avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomaarsen", "html_url": "https://github.com/tomaarsen", "followers_url": "https://api.github.com/users/tomaarsen/followers", "following_url": "https://api.github.com/users/tomaarsen/following{/other_user}", "gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions", "organizations_url": "https://api.github.com/users/tomaarsen/orgs", "repos_url": "https://api.github.com/users/tomaarsen/repos", "events_url": "https://api.github.com/users/tomaarsen/events{/privacy}", "received_events_url": "https://api.github.com/users/tomaarsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Spent the day familiarising myself with the huggingface line of products, and happened to run into some small issues here and there. Magically, I've found exactly one small issue in `transformers`, one in `accelerate` and now one in `datasets`, hah!\r\n\r\nAs for this PR, the issue seems solved according to the [new PR documentation](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4986/en/process#map):\r\n![image](https://user-images.githubusercontent.com/37621491/190646405-6afa06fa-9eac-48f6-ab30-2677944fb7b6.png)\r\n" ]
1,663,332,067,000
1,663,366,341,000
1,663,349,534,000
CONTRIBUTOR
null
Hello!

### Pull request overview
* Fix broken snippet in https://huggingface.co/docs/datasets/main/en/process that has too many quotes

### Details
The snippet in question can be found here: https://huggingface.co/docs/datasets/main/en/process#map

This screenshot shows the issue, there is a quote too many, causing the snippet to be colored incorrectly:

![image](https://user-images.githubusercontent.com/37621491/190640627-f7587362-0e44-4464-a5d1-a0b98df6986f.png)

The change speaks for itself. Thank you for the detailed documentation, by the way.

- Tom Aarsen
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4986/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4986/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4986", "html_url": "https://github.com/huggingface/datasets/pull/4986", "diff_url": "https://github.com/huggingface/datasets/pull/4986.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4986.patch", "merged_at": 1663349534000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4985
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4985/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4985/comments
https://api.github.com/repos/huggingface/datasets/issues/4985/events
https://github.com/huggingface/datasets/pull/4985
1,375,807,768
PR_kwDODunzps4_F6kU
4,985
[WIP] Prefer split patterns from directories over split patterns from filenames
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4985). All of your documentation changes will be reflected on that endpoint." ]
1,663,327,240,000
1,663,334,541,000
null
CONTRIBUTOR
null
related to https://github.com/huggingface/datasets/issues/4895

todo:
- [ ] test
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4985/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4985/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4985", "html_url": "https://github.com/huggingface/datasets/pull/4985", "diff_url": "https://github.com/huggingface/datasets/pull/4985.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4985.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4984
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4984/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4984/comments
https://api.github.com/repos/huggingface/datasets/issues/4984/events
https://github.com/huggingface/datasets/pull/4984
1,375,690,330
PR_kwDODunzps4_FhTm
4,984
docs: ✏️ add links to the Datasets API
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "OK, thanks @lhoestq. I'll close this PR, and come back to it with @stevhliu once we work on https://github.com/huggingface/datasets-server/issues/568" ]
1,663,320,852,000
1,663,333,814,000
1,663,333,653,000
CONTRIBUTOR
null
I added some links to the Datasets API in the docs. See https://github.com/huggingface/datasets-server/pull/566 for a companion PR in the datasets-server. The idea is to improve the discovery of the API through the docs. I'm a bit shy about pasting a lot of links to the API in the docs, so it's minimal for now. I'm interested in ideas to integrate the API better in these docs without being too much. cc @lhoestq @julien-c @albertvillanova @stevhliu.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4984/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4984/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4984", "html_url": "https://github.com/huggingface/datasets/pull/4984", "diff_url": "https://github.com/huggingface/datasets/pull/4984.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4984.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4983
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4983/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4983/comments
https://api.github.com/repos/huggingface/datasets/issues/4983/events
https://github.com/huggingface/datasets/issues/4983
1,375,667,654
I_kwDODunzps5R_wXG
4,983
How to convert torch.utils.data.Dataset to huggingface dataset?
{ "login": "DEROOCE", "id": 77595952, "node_id": "MDQ6VXNlcjc3NTk1OTUy", "avatar_url": "https://avatars.githubusercontent.com/u/77595952?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DEROOCE", "html_url": "https://github.com/DEROOCE", "followers_url": "https://api.github.com/users/DEROOCE/followers", "following_url": "https://api.github.com/users/DEROOCE/following{/other_user}", "gists_url": "https://api.github.com/users/DEROOCE/gists{/gist_id}", "starred_url": "https://api.github.com/users/DEROOCE/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DEROOCE/subscriptions", "organizations_url": "https://api.github.com/users/DEROOCE/orgs", "repos_url": "https://api.github.com/users/DEROOCE/repos", "events_url": "https://api.github.com/users/DEROOCE/events{/privacy}", "received_events_url": "https://api.github.com/users/DEROOCE/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi! I think you can use the newly-added `from_generator` method for that:\r\n```python\r\nfrom datasets import Dataset\r\n\r\ndef gen():\r\n for idx in len(torch_dataset):\r\n yield torch_dataset[idx] # this has to be a dictionary\r\n ## or if it's an IterableDataset\r\n # for ex in torch_dataset:\r\n # yield ex\r\n\r\ndset = Dataset.from_generator(gen)\r\n```" ]
1,663,319,710,000
1,663,342,106,000
null
NONE
null
I looked through the huggingface dataset docs, and it seems that there is no official support function to convert `torch.utils.data.Dataset` to a huggingface dataset. However, there is a way to convert a huggingface dataset to `torch.utils.data.Dataset`, like below:

```python
from datasets import Dataset
data = [[1, 2],[3, 4]]
ds = Dataset.from_dict({"data": data})
ds = ds.with_format("torch")
ds[0]
ds[:2]
```

So is there something I missed, or is there really no function to convert `torch.utils.data.Dataset` to a huggingface dataset? If so, is there any way to do this conversion? Thanks.
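A hedged sketch of the conversion being asked for, based on the `from_generator` approach suggested in the comments above; the toy torch dataset is an assumption:

```python
# Hedged sketch: torch.utils.data.Dataset -> datasets.Dataset via the
# from_generator approach from the comments; the toy dataset is assumed.
import torch
from datasets import Dataset

class ToyDataset(torch.utils.data.Dataset):
    def __len__(self):
        return 3
    def __getitem__(self, idx):
        return {"x": idx}  # items must be dicts for from_generator

torch_dataset = ToyDataset()

def gen():
    for idx in range(len(torch_dataset)):
        yield torch_dataset[idx]

hf_ds = Dataset.from_generator(gen)
print(hf_ds[0])  # {'x': 0}
```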
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4983/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4983/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4982
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4982/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4982/comments
https://api.github.com/repos/huggingface/datasets/issues/4982/events
https://github.com/huggingface/datasets/issues/4982
1,375,604,693
I_kwDODunzps5R_g_V
4,982
Create dataset_infos.json with VALIDATION and TEST splits
{ "login": "skalinin", "id": 26695348, "node_id": "MDQ6VXNlcjI2Njk1MzQ4", "avatar_url": "https://avatars.githubusercontent.com/u/26695348?v=4", "gravatar_id": "", "url": "https://api.github.com/users/skalinin", "html_url": "https://github.com/skalinin", "followers_url": "https://api.github.com/users/skalinin/followers", "following_url": "https://api.github.com/users/skalinin/following{/other_user}", "gists_url": "https://api.github.com/users/skalinin/gists{/gist_id}", "starred_url": "https://api.github.com/users/skalinin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/skalinin/subscriptions", "organizations_url": "https://api.github.com/users/skalinin/orgs", "repos_url": "https://api.github.com/users/skalinin/repos", "events_url": "https://api.github.com/users/skalinin/events{/privacy}", "received_events_url": "https://api.github.com/users/skalinin/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,663,316,479,000
1,663,323,163,000
null
NONE
null
The problem is described in that [issue](https://github.com/huggingface/datasets/issues/4895#issuecomment-1247975569).

> When I try to create data_infos.json using datasets-cli test Peter.py --save_infos --all_configs I get an error:
> ValueError: Unknown split "test". Should be one of ['train'].
>
> The data_infos.json is created perfectly fine when I use only one split - datasets.Split.TRAIN
>
> You can find the code here: https://huggingface.co/datasets/sberbank-ai/Peter/tree/add_splits (add_splits branch)

I tried to clear the cache folder, then I got another error. I ran:

```
rm -r ~/.cache/huggingface
datasets-cli test Peter.py --save_infos --all_configs
```

The error message:

```
Using custom data configuration default
Testing builder 'default' (1/1)
Downloading and preparing dataset peter/default to /Users/kalinin/.cache/huggingface/datasets/peter/default/0.0.0/ef579519e140d6a40df2555996f26165f04c47557d7373709c8d7e7b4fd7465d...
Downloading data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 5160.63it/s]
Extracting data files:   0%|          | 0/4 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/usr/local/bin/datasets-cli", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.9/site-packages/datasets/commands/datasets_cli.py", line 39, in main
    service.run()
  File "/usr/local/lib/python3.9/site-packages/datasets/commands/test.py", line 137, in run
    builder.download_and_prepare(
  File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 704, in download_and_prepare
    self._download_and_prepare(
  File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 1227, in _download_and_prepare
    super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
  File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 771, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/Users/kalinin/.cache/huggingface/modules/datasets_modules/datasets/Peter/ef579519e140d6a40df2555996f26165f04c47557d7373709c8d7e7b4fd7465d/Peter.py", line 23, in _split_generators
    data_files = dl_manager.download_and_extract(_URLS)
  File "/usr/local/lib/python3.9/site-packages/datasets/download/download_manager.py", line 431, in download_and_extract
    return self.extract(self.download(url_or_urls))
  File "/usr/local/lib/python3.9/site-packages/datasets/download/download_manager.py", line 403, in extract
    extracted_paths = map_nested(
  File "/usr/local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 393, in map_nested
    mapped = [
  File "/usr/local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 394, in <listcomp>
    _single_map_nested((function, obj, types, None, True, None))
  File "/usr/local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 330, in _single_map_nested
    return function(data_struct)
  File "/usr/local/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 213, in cached_path
    output_path = ExtractManager(cache_dir=download_config.cache_dir).extract(
  File "/usr/local/lib/python3.9/site-packages/datasets/utils/extract.py", line 46, in extract
    self.extractor.extract(input_path, output_path, extractor_format)
  File "/usr/local/lib/python3.9/site-packages/datasets/utils/extract.py", line 263, in extract
    with FileLock(lock_path):
  File "/usr/local/lib/python3.9/site-packages/datasets/utils/filelock.py", line 399, in __init__
    max_filename_length = os.statvfs(os.path.dirname(lock_file)).f_namemax
FileNotFoundError: [Errno 2] No such file or directory: ''
Exception ignored in: <function BaseFileLock.__del__ at 0x11caeec10>
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/datasets/utils/filelock.py", line 328, in __del__
    self.release(force=True)
  File "/usr/local/lib/python3.9/site-packages/datasets/utils/filelock.py", line 303, in release
    with self._thread_lock:
AttributeError: 'UnixFileLock' object has no attribute '_thread_lock'
Extracting data files:   0%|          | 0/4 [00:00<?, ?it/s]
```

Can you help me please?

## Environment info

- `datasets` version: 2.4.0
- Platform: macOS-12.5.1-x86_64-i386-64bit
- Python version: 3.9.5
- PyArrow version: 9.0.0
- Pandas version: 1.2.4
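The quoted ValueError suggests the loading script declares only a TRAIN split; below is a hedged fragment of a builder's `_split_generators` declaring all three splits, with placeholder `_URLS` paths as assumptions:

```python
# Hedged fragment of a loading-script builder that declares all three
# splits, which the quoted ValueError suggests is what's missing; the
# _URLS keys/paths are placeholder assumptions.
import datasets

_URLS = {
    "train": "train.zip",            # placeholder paths
    "validation": "validation.zip",
    "test": "test.zip",
}

class Peter(datasets.GeneratorBasedBuilder):
    def _split_generators(self, dl_manager):
        data_files = dl_manager.download_and_extract(_URLS)
        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN,
                                    gen_kwargs={"filepath": data_files["train"]}),
            datasets.SplitGenerator(name=datasets.Split.VALIDATION,
                                    gen_kwargs={"filepath": data_files["validation"]}),
            datasets.SplitGenerator(name=datasets.Split.TEST,
                                    gen_kwargs={"filepath": data_files["test"]}),
        ]
```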
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4982/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4982/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4981
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4981/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4981/comments
https://api.github.com/repos/huggingface/datasets/issues/4981/events
https://github.com/huggingface/datasets/issues/4981
1,375,086,773
I_kwDODunzps5R9ii1
4,981
Can't create a dataset with `float16` features
{ "login": "dconathan", "id": 15098095, "node_id": "MDQ6VXNlcjE1MDk4MDk1", "avatar_url": "https://avatars.githubusercontent.com/u/15098095?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dconathan", "html_url": "https://github.com/dconathan", "followers_url": "https://api.github.com/users/dconathan/followers", "following_url": "https://api.github.com/users/dconathan/following{/other_user}", "gists_url": "https://api.github.com/users/dconathan/gists{/gist_id}", "starred_url": "https://api.github.com/users/dconathan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dconathan/subscriptions", "organizations_url": "https://api.github.com/users/dconathan/orgs", "repos_url": "https://api.github.com/users/dconathan/repos", "events_url": "https://api.github.com/users/dconathan/events{/privacy}", "received_events_url": "https://api.github.com/users/dconathan/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi @dconathan, thanks for reporting.\r\n\r\nWe rely on Arrow as a backend, and as far as I know currently support for `float16` in Arrow is not fully implemented in Python (C++), hence the `ArrowNotImplementedError` you get.\r\n\r\nSee, e.g.: https://arrow.apache.org/docs/status.html?highlight=float16#data-types", "Thanks for the link…. didn’t realize arrow didn’t support it yet. Should it be removed from https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_classes#datasets.Value until Arrow supports it?", "Yes, you are right: maybe we should either remove it from our docs or add a comment explaining the issue.\r\n\r\nThe thing is that in Arrow it is partially supported: you can create `float16` values, but you can't cast them from/to other types. And current implementation of `Value` always tries to perform a cast from `float64` to `float16`." ]
1,663,275,804,000
1,663,322,231,000
null
NONE
null
## Describe the bug

I can't create a dataset with `float16` features. I understand from the traceback that this is a `pyarrow` error, but I don't see anything in the `datasets` documentation about how to do this successfully. Is it actually supported? I've tried older versions of `pyarrow` as well, with the same exact error.

The bug seems to arise from `datasets` casting the values to `double` and then `pyarrow` doesn't know how to convert those back to `float16`... does that sound right? Is there a way to bypass this, since it's not necessary in the `numpy` and `torch` cases?

Thanks!

## Steps to reproduce the bug

All of the following raise the following error with the same exact (as far as I can tell) traceback:

```python
ArrowNotImplementedError: Unsupported cast from double to halffloat using function cast_half_float
```

```python
from datasets import Dataset, Features, Value

Dataset.from_dict({"x": [0.0, 1.0, 2.0]}, features=Features(x=Value("float16")))

import numpy as np
Dataset.from_dict({"x": np.arange(3, dtype=np.float16)}, features=Features(x=Value("float16")))

import torch
Dataset.from_dict({"x": torch.arange(3).to(torch.float16)}, features=Features(x=Value("float16")))
```

## Expected results

A dataset with `float16` features is successfully created.

## Actual results

```python
---------------------------------------------------------------------------
ArrowNotImplementedError                  Traceback (most recent call last)
Cell In [14], line 1
----> 1 Dataset.from_dict({"x": [1.0, 2.0, 3.0]}, features=Features(x=Value("float16")))

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py:870, in Dataset.from_dict(cls, mapping, features, info, split)
    865 mapping = features.encode_batch(mapping)
    866 mapping = {
    867     col: OptimizedTypedSequence(data, type=features[col] if features is not None else None, col=col)
    868     for col, data in mapping.items()
    869 }
--> 870 pa_table = InMemoryTable.from_pydict(mapping=mapping)
    871 if info.features is None:
    872     info.features = Features({col: ts.get_inferred_type() for col, ts in mapping.items()})

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:750, in InMemoryTable.from_pydict(cls, *args, **kwargs)
    734 @classmethod
    735 def from_pydict(cls, *args, **kwargs):
    736     """
    737     Construct a Table from Arrow arrays or columns
    (...)
    748     :class:`datasets.table.Table`:
    749     """
--> 750 return cls(pa.Table.from_pydict(*args, **kwargs))

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/table.pxi:3648, in pyarrow.lib.Table.from_pydict()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/table.pxi:5174, in pyarrow.lib._from_pydict()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:343, in pyarrow.lib.asarray()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:231, in pyarrow.lib.array()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:110, in pyarrow.lib._handle_arrow_array_protocol()

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py:197, in TypedSequence.__arrow_array__(self, type)
    192 # otherwise we can finally use the user's type
    193 elif type is not None:
    194     # We use cast_array_to_feature to support casting to custom types like Audio and Image
    195     # Also, when trying type "string", we don't want to convert integers or floats to "string".
    196     # We only do it if trying_type is False - since this is what the user asks for.
--> 197     out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
    198     return out
    199 except (TypeError, pa.lib.ArrowInvalid) as e:  # handle type errors and overflows

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1683, in _wrap_for_chunked_arrays.<locals>.wrapper(array, *args, **kwargs)
   1681     return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
   1682 else:
-> 1683     return func(array, *args, **kwargs)

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1853, in cast_array_to_feature(array, feature, allow_number_to_str)
   1851     return array_cast(array, get_nested_type(feature), allow_number_to_str=allow_number_to_str)
   1852 elif not isinstance(feature, (Sequence, dict, list, tuple)):
-> 1853     return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
   1854 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1683, in _wrap_for_chunked_arrays.<locals>.wrapper(array, *args, **kwargs)
   1681     return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
   1682 else:
-> 1683     return func(array, *args, **kwargs)

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1762, in array_cast(array, pa_type, allow_number_to_str)
   1760 if pa.types.is_null(pa_type) and not pa.types.is_null(array.type):
   1761     raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}")
-> 1762 return array.cast(pa_type)
   1763 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{pa_type}")

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:919, in pyarrow.lib.Array.cast()

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/compute.py:389, in cast(arr, target_type, safe, options)
    387 else:
    388     options = CastOptions.safe(target_type)
--> 389 return call_function("cast", [arr], options)

File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/_compute.pyx:560, in pyarrow._compute.call_function()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/_compute.pyx:355, in pyarrow._compute.Function.call()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/error.pxi:121, in pyarrow.lib.check_status()

ArrowNotImplementedError: Unsupported cast from double to halffloat using function cast_half_float
```

## Environment info

- `datasets` version: 2.4.0
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
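A hedged illustration of the partial Arrow support described in the comments above: `float16` arrays can be built directly from NumPy, but the `double` to `halffloat` cast that `datasets` attempts is not implemented:

```python
# Hedged illustration of the comments above: direct float16 creation works,
# but the double -> halffloat cast that datasets attempts does not.
import numpy as np
import pyarrow as pa

direct = pa.array(np.arange(3, dtype=np.float16))
print(direct.type)  # halffloat

try:
    pa.array([0.0, 1.0, 2.0]).cast(pa.float16())
except pa.ArrowNotImplementedError as e:
    print(e)  # Unsupported cast from double to halffloat using function cast_half_float
```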
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4981/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4981/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4980
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4980/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4980/comments
https://api.github.com/repos/huggingface/datasets/issues/4980/events
https://github.com/huggingface/datasets/issues/4980
1,374,868,083
I_kwDODunzps5R8tJz
4,980
Make `pyarrow` optional
{ "login": "KOLANICH", "id": 240344, "node_id": "MDQ6VXNlcjI0MDM0NA==", "avatar_url": "https://avatars.githubusercontent.com/u/240344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KOLANICH", "html_url": "https://github.com/KOLANICH", "followers_url": "https://api.github.com/users/KOLANICH/followers", "following_url": "https://api.github.com/users/KOLANICH/following{/other_user}", "gists_url": "https://api.github.com/users/KOLANICH/gists{/gist_id}", "starred_url": "https://api.github.com/users/KOLANICH/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KOLANICH/subscriptions", "organizations_url": "https://api.github.com/users/KOLANICH/orgs", "repos_url": "https://api.github.com/users/KOLANICH/repos", "events_url": "https://api.github.com/users/KOLANICH/events{/privacy}", "received_events_url": "https://api.github.com/users/KOLANICH/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "The whole datasets library is pretty much a wrapper to pyarrow (just take a look at some of the source for a Dataset) https://github.com/huggingface/datasets/blob/51aef08ad7053c0bfe8f9a961207b26df15850d3/src/datasets/arrow_dataset.py#L639 \r\n\r\nI think removing the pyarrow dependency would involve a complete rewrite / a different library with minimal functionality (datasets-lite ?)", "Thanks for the proposal, @KOLANICH. And also thanks for your answer, @dconathan.\r\n\r\nIndeed, we are using `pyarrow` as the backend for our datasets, in order to cache them and also allow memory-mapping (using datasets larger than your RAM memory).\r\n\r\nOne way to avoid using `pyarrow` could be loading the datasets in streaming mode, by passing `streaming=True` to `load_dataset`. This way you basically get a generator for the dataset; nothing is downloaded, nor cached. ", "Thanks for the info. Could `datasets` then be made optional for `transformers` instead? I used `transformers` only to deal with pretrained models to deploy them (convert to ONNX, and then I use TVM), so I don't really need `pyarrow` and `datasets` by now.\r\n" ]
1,663,263,483,000
1,663,349,027,000
1,663,349,027,000
NONE
null
**Is your feature request related to a problem? Please describe.**
Is `pyarrow` really needed for every dataset?

**Describe the solution you'd like**
It is made optional.

**Describe alternatives you've considered**
Likely, no.
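A hedged sketch of the streaming workaround suggested in the comments above, which avoids the Arrow cache entirely; the dataset name is illustrative:

```python
# Hedged sketch of the streaming workaround from the comments: with
# streaming=True, nothing is downloaded up front or cached via Arrow.
# The dataset name is illustrative.
from datasets import load_dataset

stream = load_dataset("imdb", split="train", streaming=True)
print(next(iter(stream)))
```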
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4980/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4980/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4979
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4979/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4979/comments
https://api.github.com/repos/huggingface/datasets/issues/4979/events
https://github.com/huggingface/datasets/pull/4979
1,374,820,758
PR_kwDODunzps4_CouM
4,979
Fix missing tags in dataset cards
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,663,260,663,000
1,663,262,062,000
1,663,261,929,000
MEMBER
null
Fix missing tags in dataset cards. This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.

Related to:
- #4833
- #4891
- #4896
- #4908
- #4921
- #4931
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4979/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4979/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4979", "html_url": "https://github.com/huggingface/datasets/pull/4979", "diff_url": "https://github.com/huggingface/datasets/pull/4979.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4979.patch", "merged_at": 1663261929000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4978
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4978/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4978/comments
https://api.github.com/repos/huggingface/datasets/issues/4978/events
https://github.com/huggingface/datasets/pull/4978
1,374,271,504
PR_kwDODunzps4_Axnh
4,978
Update IndicGLUE download links
{ "login": "sumanthd17", "id": 28291870, "node_id": "MDQ6VXNlcjI4MjkxODcw", "avatar_url": "https://avatars.githubusercontent.com/u/28291870?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sumanthd17", "html_url": "https://github.com/sumanthd17", "followers_url": "https://api.github.com/users/sumanthd17/followers", "following_url": "https://api.github.com/users/sumanthd17/following{/other_user}", "gists_url": "https://api.github.com/users/sumanthd17/gists{/gist_id}", "starred_url": "https://api.github.com/users/sumanthd17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sumanthd17/subscriptions", "organizations_url": "https://api.github.com/users/sumanthd17/orgs", "repos_url": "https://api.github.com/users/sumanthd17/repos", "events_url": "https://api.github.com/users/sumanthd17/events{/privacy}", "received_events_url": "https://api.github.com/users/sumanthd17/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,663,236,357,000
1,663,279,220,000
1,663,279,054,000
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4978/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4978/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4978", "html_url": "https://github.com/huggingface/datasets/pull/4978", "diff_url": "https://github.com/huggingface/datasets/pull/4978.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4978.patch", "merged_at": 1663279054000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4977
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4977/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4977/comments
https://api.github.com/repos/huggingface/datasets/issues/4977/events
https://github.com/huggingface/datasets/issues/4977
1,372,962,157
I_kwDODunzps5R1b1t
4,977
Providing dataset size
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi @sashavor, thanks for your suggestion.\r\n\r\nUntil now we have the CLI command \r\n```\r\ndatasets-cli test datasets/<your-dataset-folder> --save_infos --all_configs\r\n```\r\nthat generates the `dataset_infos.json` with the size of the downloaded dataset, among other information.\r\n\r\nWe are currently in the middle of removing those JSON files and putting their information directly in the header of the `README.md` (as YAML tags). Normally, the CLI command should continue working but saving its output to the dataset card instead. See:\r\n- #4926", "Additionally, the download size can be inferred by doing HEAD requests to the files to be downloaded. And for files hosted on the hub you can even get the file sizes using the Hub API", "Amazing @albertvillanova ! I think just having that information visible in the dataset info (without having to do any requests/additional coding) would be really useful :hugs: " ]
1,663,160,967,000
1,663,257,838,000
null
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.**
Especially for big datasets like [LAION](https://huggingface.co/datasets/laion/laion2B-en/), it's hard to know exactly the downloaded size (because there are many files and you don't have their exact size when downloaded).

**Describe the solution you'd like**
Auto-populating the downloaded dataset size on the dataset page would be really useful, including that of each split (when there are some).

**Describe alternatives you've considered**
People should be adding this to dataset cards, but I don't think that is systematically the case :slightly_smiling_face:

**Additional context**
Mentioned to @lhoestq
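A hedged sketch of the comments' point that the download size can be inferred with HEAD requests; the file URL is illustrative:

```python
# Hedged sketch: infer a hosted file's download size from a HEAD request,
# as the comments suggest; the URL is illustrative.
import requests

url = "https://huggingface.co/datasets/laion/laion2B-en/resolve/main/README.md"
size = int(requests.head(url, allow_redirects=True).headers["Content-Length"])
print(size, "bytes")
```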
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4977/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4977/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4976
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4976/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4976/comments
https://api.github.com/repos/huggingface/datasets/issues/4976/events
https://github.com/huggingface/datasets/issues/4976
1,372,322,382
I_kwDODunzps5Ry_pO
4,976
Hope to adapt Python3.9 as soon as possible
{ "login": "RedHeartSecretMan", "id": 74012141, "node_id": "MDQ6VXNlcjc0MDEyMTQx", "avatar_url": "https://avatars.githubusercontent.com/u/74012141?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RedHeartSecretMan", "html_url": "https://github.com/RedHeartSecretMan", "followers_url": "https://api.github.com/users/RedHeartSecretMan/followers", "following_url": "https://api.github.com/users/RedHeartSecretMan/following{/other_user}", "gists_url": "https://api.github.com/users/RedHeartSecretMan/gists{/gist_id}", "starred_url": "https://api.github.com/users/RedHeartSecretMan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RedHeartSecretMan/subscriptions", "organizations_url": "https://api.github.com/users/RedHeartSecretMan/orgs", "repos_url": "https://api.github.com/users/RedHeartSecretMan/repos", "events_url": "https://api.github.com/users/RedHeartSecretMan/events{/privacy}", "received_events_url": "https://api.github.com/users/RedHeartSecretMan/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi! `datasets` should work in Python 3.9. What kind of issue have you encountered?", "There is this related issue already: https://github.com/huggingface/datasets/issues/4113\r\nAnd I guess we need a CI job for 3.9 ^^" ]
1,663,130,542,000
1,663,256,697,000
null
NONE
null
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context about the feature request here.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4976/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4976/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4975
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4975/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4975/comments
https://api.github.com/repos/huggingface/datasets/issues/4975/events
https://github.com/huggingface/datasets/pull/4975
1,371,703,691
PR_kwDODunzps4-4NXX
4,975
Add `fn_kwargs` param to `IterableDataset.map`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,663,085,945,000
1,663,087,667,000
1,663,087,534,000
CONTRIBUTOR
null
Add the `fn_kwargs` parameter to `IterableDataset.map`. ("Resolves" https://discuss.huggingface.co/t/how-to-use-large-image-text-datasets-in-hugging-face-hub-without-downloading-for-free/22780/3)
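A minimal usage sketch of the new parameter (the dataset and function are illustrative):

```python
from datasets import load_dataset

def add_prefix(example, prefix):
    example["text"] = prefix + example["text"]
    return example

# streaming=True yields an IterableDataset; fn_kwargs forwards extra
# keyword arguments to the mapped function, mirroring Dataset.map.
ds = load_dataset("ag_news", split="train", streaming=True)
ds = ds.map(add_prefix, fn_kwargs={"prefix": ">> "})
print(next(iter(ds))["text"])
```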
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4975/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4975/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4975", "html_url": "https://github.com/huggingface/datasets/pull/4975", "diff_url": "https://github.com/huggingface/datasets/pull/4975.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4975.patch", "merged_at": 1663087534000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4974
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4974/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4974/comments
https://api.github.com/repos/huggingface/datasets/issues/4974/events
https://github.com/huggingface/datasets/pull/4974
1,371,682,020
PR_kwDODunzps4-4Iri
4,974
[GH->HF] Part 2: Remove all dataset scripts from GitHub
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4974). All of your documentation changes will be reflected on that endpoint." ]
1,663,084,872,000
1,663,333,509,000
null
MEMBER
null
Now that all the datasets live on the Hub, we can remove the /datasets directory that contains all the dataset scripts of this repository. Needs https://github.com/huggingface/datasets/pull/4973 to be merged first, and PRs to be enabled on the Hub for non-namespaced datasets.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4974/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4974/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4974", "html_url": "https://github.com/huggingface/datasets/pull/4974", "diff_url": "https://github.com/huggingface/datasets/pull/4974.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4974.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4973
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4973/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4973/comments
https://api.github.com/repos/huggingface/datasets/issues/4973/events
https://github.com/huggingface/datasets/pull/4973
1,371,600,074
PR_kwDODunzps4-33JW
4,973
[GH->HF] Load datasets from the Hub
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Duplicate of:\r\n- #4059" ]
1,663,081,301,000
1,663,255,611,000
1,663,255,466,000
MEMBER
null
Currently, datasets with no namespace (e.g. squad, glue) are loaded from GitHub. In this PR I changed this logic to use the Hugging Face Hub instead. This is the first step in removing all the dataset scripts from this repository, related to the discussions in https://github.com/huggingface/datasets/pull/4059 (I should have continued from that PR, actually).
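For users, the call itself is unchanged; only the resolution of non-namespaced names moves to the Hub. A sketch (dataset choice is illustrative):

```python
from datasets import load_dataset

# "squad" has no namespace; with this change it resolves to
# https://huggingface.co/datasets/squad instead of a script fetched
# from the GitHub /datasets directory.
ds = load_dataset("squad", split="validation")
print(ds[0]["question"])
```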
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4973/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4973/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4973", "html_url": "https://github.com/huggingface/datasets/pull/4973", "diff_url": "https://github.com/huggingface/datasets/pull/4973.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4973.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4972
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4972/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4972/comments
https://api.github.com/repos/huggingface/datasets/issues/4972/events
https://github.com/huggingface/datasets/pull/4972
1,371,443,306
PR_kwDODunzps4-3VVF
4,972
Fix map batched with torch output
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4972). All of your documentation changes will be reflected on that endpoint." ]
1,663,074,994,000
1,663,256,568,000
null
MEMBER
null
Reported in https://discuss.huggingface.co/t/typeerror-when-applying-map-after-set-format-type-torch/23067/2 Currently, batched `map` fails if the map function returns a torch tensor. I fixed it for torch, TensorFlow, JAX, and pandas Series.
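A minimal reproduction sketch of the failure mode being fixed (values and shapes are illustrative):

```python
import torch
from datasets import Dataset

ds = Dataset.from_dict({"x": [1.0, 2.0, 3.0, 4.0]})

# The mapped function returns a torch tensor for the whole batch;
# before this fix, writing such output from a batched map raised an error.
ds = ds.map(lambda batch: {"y": torch.tensor(batch["x"]) * 2}, batched=True)
print(ds["y"])
```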
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4972/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4972/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4972", "html_url": "https://github.com/huggingface/datasets/pull/4972", "diff_url": "https://github.com/huggingface/datasets/pull/4972.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4972.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4971
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4971/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4971/comments
https://api.github.com/repos/huggingface/datasets/issues/4971/events
https://github.com/huggingface/datasets/pull/4971
1,370,319,516
PR_kwDODunzps4-zk3g
4,971
Preserve non-`input_columns` in `Dataset.map` if `input_columns` are specified
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,663,006,104,000
1,663,077,068,000
1,663,076,925,000
CONTRIBUTOR
null
Currently, if the `input_columns` list in `Dataset.map` is specified, the columns not in that list are dropped after the `map` transform. This makes the behavior inconsistent with `IterableDataset.map`. (It seems this issue was introduced by mistake in https://github.com/huggingface/datasets/pull/2246) Fix https://github.com/huggingface/datasets/issues/4858
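A small sketch of the behavior after the fix (column names are illustrative):

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2], "b": ["x", "y"]})

# The mapped function only receives column "a", but column "b" is now
# preserved in the output instead of being dropped, matching
# IterableDataset.map.
ds2 = ds.map(lambda a: {"a_plus_one": a + 1}, input_columns=["a"])
print(ds2.column_names)  # ['a', 'b', 'a_plus_one']
```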
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4971/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4971/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4971", "html_url": "https://github.com/huggingface/datasets/pull/4971", "diff_url": "https://github.com/huggingface/datasets/pull/4971.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4971.patch", "merged_at": 1663076924000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4970
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4970/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4970/comments
https://api.github.com/repos/huggingface/datasets/issues/4970/events
https://github.com/huggingface/datasets/pull/4970
1,369,433,074
PR_kwDODunzps4-wkY2
4,970
Support streaming nli_tr dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662,968,925,000
1,662,972,304,000
1,662,972,188,000
MEMBER
null
Support streaming nli_tr dataset. This PR removes the legacy `codecs.open` and replaces it with the built-in `open`, which supports passing an encoding. Fixes #3186.
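A self-contained sketch of the pattern change, assuming a loading script that reads UTF-8 text files (the sample file here is fabricated for illustration):

```python
import codecs
import json
import os
import tempfile

# Create a tiny JSON-lines file so the example runs on its own.
path = os.path.join(tempfile.mkdtemp(), "sample.jsonl")
with open(path, "w", encoding="utf-8") as f:
    f.write(json.dumps({"premise": "example"}) + "\n")

# Legacy pattern removed by the PR:
with codecs.open(path, encoding="utf-8") as f:
    print(json.loads(f.readline()))

# Replacement: the built-in open with an explicit encoding, which
# datasets can patch for streaming mode.
with open(path, encoding="utf-8") as f:
    print(json.loads(f.readline()))
```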
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4970/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4970/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4970", "html_url": "https://github.com/huggingface/datasets/pull/4970", "diff_url": "https://github.com/huggingface/datasets/pull/4970.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4970.patch", "merged_at": 1662972188000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4969
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4969/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4969/comments
https://api.github.com/repos/huggingface/datasets/issues/4969/events
https://github.com/huggingface/datasets/pull/4969
1,369,334,740
PR_kwDODunzps4-wPOk
4,969
Fix data URL and metadata of vivos dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662,963,154,000
1,662,966,975,000
1,662,966,859,000
MEMBER
null
After contacting the authors of the VIVOS dataset to report that their data server is down, we have received a reply from Hieu-Thi Luong that their data is now hosted on Zenodo: https://doi.org/10.5281/zenodo.7068130 This PR updates their data URL and some metadata (homepage, citation and license). Fix #4936.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4969/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4969/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4969", "html_url": "https://github.com/huggingface/datasets/pull/4969", "diff_url": "https://github.com/huggingface/datasets/pull/4969.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4969.patch", "merged_at": 1662966859000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4968
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4968/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4968/comments
https://api.github.com/repos/huggingface/datasets/issues/4968/events
https://github.com/huggingface/datasets/pull/4968
1,369,312,877
PR_kwDODunzps4-wKkw
4,968
Support streaming compguesswhat dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662,961,344,000
1,662,969,606,000
1,662,969,486,000
MEMBER
null
Support streaming `compguesswhat` dataset. Fix #3191.
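With streaming supported, the dataset can be iterated without a full download. A sketch; the config name is an assumption, as the dataset ships several:

```python
from datasets import load_dataset

# "compguesswhat-original" is assumed to be a valid config name.
ds = load_dataset("compguesswhat", "compguesswhat-original", split="train", streaming=True)
print(next(iter(ds)).keys())
```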
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4968/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4968/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4968", "html_url": "https://github.com/huggingface/datasets/pull/4968", "diff_url": "https://github.com/huggingface/datasets/pull/4968.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4968.patch", "merged_at": 1662969486000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4967
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4967/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4967/comments
https://api.github.com/repos/huggingface/datasets/issues/4967/events
https://github.com/huggingface/datasets/pull/4967
1,369,092,452
PR_kwDODunzps4-vbS-
4,967
Strip "/" in local dataset path to avoid empty dataset name error
{ "login": "apohllo", "id": 40543, "node_id": "MDQ6VXNlcjQwNTQz", "avatar_url": "https://avatars.githubusercontent.com/u/40543?v=4", "gravatar_id": "", "url": "https://api.github.com/users/apohllo", "html_url": "https://github.com/apohllo", "followers_url": "https://api.github.com/users/apohllo/followers", "following_url": "https://api.github.com/users/apohllo/following{/other_user}", "gists_url": "https://api.github.com/users/apohllo/gists{/gist_id}", "starred_url": "https://api.github.com/users/apohllo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apohllo/subscriptions", "organizations_url": "https://api.github.com/users/apohllo/orgs", "repos_url": "https://api.github.com/users/apohllo/repos", "events_url": "https://api.github.com/users/apohllo/events{/privacy}", "received_events_url": "https://api.github.com/users/apohllo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662,937,756,000
1,662,996,778,000
1,662,996,638,000
CONTRIBUTOR
null
null
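The title alone describes the bug; a minimal sketch of what goes wrong with a trailing slash (paths are illustrative):

```python
import os

path = "/data/my_dataset/"
print(os.path.basename(path))              # '' -> empty dataset name error
print(os.path.basename(path.rstrip("/")))  # 'my_dataset'
```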
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4967/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4967/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4967", "html_url": "https://github.com/huggingface/datasets/pull/4967", "diff_url": "https://github.com/huggingface/datasets/pull/4967.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4967.patch", "merged_at": 1662996638000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4965
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4965/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4965/comments
https://api.github.com/repos/huggingface/datasets/issues/4965/events
https://github.com/huggingface/datasets/issues/4965
1,368,661,002
I_kwDODunzps5RlBwK
4,965
[Apple M1] MemoryError: Cannot allocate write+execute memory for ffi.callback()
{ "login": "hoangtnm", "id": 35718590, "node_id": "MDQ6VXNlcjM1NzE4NTkw", "avatar_url": "https://avatars.githubusercontent.com/u/35718590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hoangtnm", "html_url": "https://github.com/hoangtnm", "followers_url": "https://api.github.com/users/hoangtnm/followers", "following_url": "https://api.github.com/users/hoangtnm/following{/other_user}", "gists_url": "https://api.github.com/users/hoangtnm/gists{/gist_id}", "starred_url": "https://api.github.com/users/hoangtnm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hoangtnm/subscriptions", "organizations_url": "https://api.github.com/users/hoangtnm/orgs", "repos_url": "https://api.github.com/users/hoangtnm/repos", "events_url": "https://api.github.com/users/hoangtnm/events{/privacy}", "received_events_url": "https://api.github.com/users/hoangtnm/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi! This seems like a bug in `soundfile`. Could you please open an issue in their repo? `soundfile` works without any issues on my M1, so I'm not sure we can help.", "Hi @mariosasko, can you share how you installed `soundfile` on your mac M1?" ]
1,662,825,349,000
1,663,426,281,000
null
NONE
null
## Describe the bug I'm trying to run `cast_column("audio", Audio())` on Apple M1 Pro, but it seems that it doesn't work. ## Steps to reproduce the bug ```python import datasets dataset = load_dataset("csv", data_files="./train.csv")["train"] dataset = dataset.map(lambda x: {"audio": str(DATA_DIR / "audio" / x["audio"])}) dataset = dataset.cast_column("audio", Audio()) dataset[0] ``` ## Expected results ``` {'audio': {'bytes': None, 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav'}, 'english_transcription': 'I would like to set up a joint account with my partner', 'intent_class': 11, 'lang_id': 4, 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'transcription': 'I would like to set up a joint account with my partner'} ``` ## Actual results ````--------------------------------------------------------------------------- MemoryError Traceback (most recent call last) Input In [6], in <cell line: 1>() ----> 1 dataset[0] File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/arrow_dataset.py:2165, in Dataset.__getitem__(self, key) 2163 def __getitem__(self, key): # noqa: F811 2164 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).""" -> 2165 return self._getitem( 2166 key, 2167 ) File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/arrow_dataset.py:2150, in Dataset._getitem(self, key, decoded, **kwargs) 2148 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs) 2149 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) -> 2150 formatted_output = format_table( 2151 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns 2152 ) 2153 return formatted_output File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns) 530 python_formatter = PythonFormatter(features=None) 531 if format_columns is None: --> 532 return formatter(pa_table, query_type=query_type) 533 elif query_type == "column": 534 if key in format_columns: File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:281, in Formatter.__call__(self, pa_table, query_type) 279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]: 280 if query_type == "row": --> 281 return self.format_row(pa_table) 282 elif query_type == "column": 283 return self.format_column(pa_table) File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:312, in PythonFormatter.format_row(self, pa_table) 310 row = self.python_arrow_extractor().extract_row(pa_table) 311 if self.decoded: --> 312 row = self.python_features_decoder.decode_row(row) 313 return row File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:221, in PythonFeaturesDecoder.decode_row(self, row) 220 def decode_row(self, row: dict) -> dict: --> 221 return self.features.decode_example(row) if self.features else row File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1647, in Features.decode_example(self, example, token_per_repo_id) 
1634 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None): 1635 """Decode example with custom feature decoding. 1636 1637 Args: (...) 1644 :obj:`dict[str, Any]` 1645 """ -> 1647 return { 1648 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1649 if self._column_requires_decoding[column_name] 1650 else value 1651 for column_name, (feature, value) in zip_dict( 1652 {key: value for key, value in self.items() if key in example}, example 1653 ) 1654 } File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1648, in <dictcomp>(.0) 1634 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None): 1635 """Decode example with custom feature decoding. 1636 1637 Args: (...) 1644 :obj:`dict[str, Any]` 1645 """ 1647 return { -> 1648 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1649 if self._column_requires_decoding[column_name] 1650 else value 1651 for column_name, (feature, value) in zip_dict( 1652 {key: value for key, value in self.items() if key in example}, example 1653 ) 1654 } File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1260, in decode_nested_example(schema, obj, token_per_repo_id) 1257 # Object with special decoding: 1258 elif isinstance(schema, (Audio, Image)): 1259 # we pass the token to read and decode files from private repositories in streaming mode -> 1260 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None 1261 return obj File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/audio.py:156, in Audio.decode_example(self, value, token_per_repo_id) 154 array, sampling_rate = self._decode_non_mp3_file_like(file) 155 else: --> 156 array, sampling_rate = self._decode_non_mp3_path_like(path, token_per_repo_id=token_per_repo_id) 157 return {"path": path, "array": array, "sampling_rate": sampling_rate} File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/audio.py:257, in Audio._decode_non_mp3_path_like(self, path, format, token_per_repo_id) 254 use_auth_token = None 256 with xopen(path, "rb", use_auth_token=use_auth_token) as f: --> 257 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono) 258 return array, sampling_rate File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/util/decorators.py:88, in deprecate_positional_args.<locals>._inner_deprecate_positional_args.<locals>.inner_f(*args, **kwargs) 86 extra_args = len(args) - len(all_args) 87 if extra_args <= 0: ---> 88 return f(*args, **kwargs) 90 # extra_args > 0 91 args_msg = [ 92 "{}={}".format(name, arg) 93 for name, arg in zip(kwonly_args[:extra_args], args[-extra_args:]) 94 ] File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/core/audio.py:164, in load(path, sr, mono, offset, duration, dtype, res_type) 161 else: 162 # Otherwise try soundfile first, and then fall back if necessary 163 try: --> 164 y, sr_native = __soundfile_load(path, offset, duration, dtype) 166 except RuntimeError as exc: 167 # If soundfile failed, try audioread instead 168 if isinstance(path, (str, pathlib.PurePath)): File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/core/audio.py:195, in __soundfile_load(path, offset, duration, dtype) 192 context = path 193 else: 194 # Otherwise, create the soundfile object --> 195 context = sf.SoundFile(path) 197 with context 
as sf_desc: 198 sr_native = sf_desc.samplerate File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:629, in SoundFile.__init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd) 626 self._mode = mode 627 self._info = _create_info_struct(file, mode, samplerate, channels, 628 format, subtype, endian) --> 629 self._file = self._open(file, mode_int, closefd) 630 if set(mode).issuperset('r+') and self.seekable(): 631 # Move write position to 0 (like in Python file objects) 632 self.seek(0) File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:1179, in SoundFile._open(self, file, mode_int, closefd) 1177 file_ptr = _snd.sf_open_fd(file, mode_int, self._info, closefd) 1178 elif _has_virtual_io_attrs(file, mode_int): -> 1179 file_ptr = _snd.sf_open_virtual(self._init_virtual_io(file), 1180 mode_int, self._info, _ffi.NULL) 1181 else: 1182 raise TypeError("Invalid file: {0!r}".format(self.name)) File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:1197, in SoundFile._init_virtual_io(self, file) 1194 def _init_virtual_io(self, file): 1195 """Initialize callback functions for sf_open_virtual().""" 1196 @_ffi.callback("sf_vio_get_filelen") -> 1197 def vio_get_filelen(user_data): 1198 curr = file.tell() 1199 file.seek(0, SEEK_END) MemoryError: Cannot allocate write+execute memory for ffi.callback(). You might be running on a system that prevents this. For more information, see https://cffi.readthedocs.io/en/latest/using.html#callbacks ``` ## Environment info - `datasets` version: 2.4.0 - Platform: macOS-12.5.1-arm64-arm-64bit - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.4
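The reproduction snippet above omits its imports; a self-contained version might look like the following, with `DATA_DIR` kept as a placeholder for the reporter's data directory:

```python
from pathlib import Path
from datasets import load_dataset, Audio

DATA_DIR = Path("...")  # placeholder for the reporter's data directory

dataset = load_dataset("csv", data_files="./train.csv")["train"]
dataset = dataset.map(lambda x: {"audio": str(DATA_DIR / "audio" / x["audio"])})
dataset = dataset.cast_column("audio", Audio())
dataset[0]
```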
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4965/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4965/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4964
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4964/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4964/comments
https://api.github.com/repos/huggingface/datasets/issues/4964/events
https://github.com/huggingface/datasets/issues/4964
1,368,617,322
I_kwDODunzps5Rk3Fq
4,964
Columns of arrays (2D+) are using unreasonably high memory
{ "login": "vigsterkr", "id": 30353, "node_id": "MDQ6VXNlcjMwMzUz", "avatar_url": "https://avatars.githubusercontent.com/u/30353?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vigsterkr", "html_url": "https://github.com/vigsterkr", "followers_url": "https://api.github.com/users/vigsterkr/followers", "following_url": "https://api.github.com/users/vigsterkr/following{/other_user}", "gists_url": "https://api.github.com/users/vigsterkr/gists{/gist_id}", "starred_url": "https://api.github.com/users/vigsterkr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vigsterkr/subscriptions", "organizations_url": "https://api.github.com/users/vigsterkr/orgs", "repos_url": "https://api.github.com/users/vigsterkr/repos", "events_url": "https://api.github.com/users/vigsterkr/events{/privacy}", "received_events_url": "https://api.github.com/users/vigsterkr/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "note i have tried the same code with `datasets` version 2.4.0, the outcome is the very same as described above." ]
1,662,815,242,000
1,662,815,297,000
null
NONE
null
## Describe the bug When storing `Array2D`, `Array3D`, etc. as column values in a dataset, creating or accessing that column (depending on how you create it, see the code below) causes more than a 10-fold increase in memory usage. ## Steps to reproduce the bug ```python from datasets import Dataset, Features, Array2D, Array3D import numpy as np column_name = "a" array_shape = (64, 64, 3) data = np.random.random((10000,) + array_shape) dataset = Dataset.from_dict({column_name: data}, features=Features({column_name: Array3D(shape=array_shape, dtype="float64")})) ``` The code above uses about 10 GB of RAM while constructing the `dataset` object. The code below uses roughly the same amount of memory (and time) when actually accessing the data of that column. ```python from datasets import Dataset import numpy as np column_name = "a" array_shape = (64, 64, 3) data = np.random.random((10000,) + array_shape) dataset = Dataset.from_dict({column_name: data}) dataset[column_name] ``` ## Expected results Some memory overhead, but nothing like the current level, and certainly not the runtime overhead that is currently happening. ## Actual results Enormous memory and runtime overhead. ## Environment info - `datasets` version: 2.3.2 - Platform: macOS-12.5.1-arm64-arm-64bit - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.4
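A quick back-of-the-envelope check of the numbers in the report: the raw payload is about 1 GB, so ~10 GB during construction is roughly a 10x overhead:

```python
# 10,000 float64 arrays of shape (64, 64, 3), 8 bytes per element:
print(10_000 * 64 * 64 * 3 * 8 / 1e9, "GB")  # ~0.98 GB of raw data
```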
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4964/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4964/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4963
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4963/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4963/comments
https://api.github.com/repos/huggingface/datasets/issues/4963/events
https://github.com/huggingface/datasets/issues/4963
1,368,201,188
I_kwDODunzps5RjRfk
4,963
Dataset without script does not support regular JSON data file
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi @julien-c,\r\n\r\nOut of the box, we only support JSON lines (NDJSON) data files, but your data file is a regular JSON file. The reason is we use `pyarrow.json.read_json` and this only supports line-delimited JSON. " ]
1,662,749,133,000
1,662,971,727,000
null
MEMBER
null
### Link https://huggingface.co/datasets/julien-c/label-studio-my-dogs ### Description <img width="1115" alt="image" src="https://user-images.githubusercontent.com/326577/189422048-7e9c390f-bea7-4521-a232-43f049ccbd1f.png"> ### Owner Yes
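Per the comment above, only line-delimited JSON (NDJSON) is supported out of the box. A small sketch of the difference, with a conversion that makes such a file loadable without a script (filenames are illustrative):

```python
import json

records = [{"text": "a"}, {"text": "b"}]

# Regular JSON (the failing case): one top-level value.
with open("data.json", "w") as f:
    json.dump(records, f)

# JSON Lines / NDJSON (supported): one object per line.
with open("data.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```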
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4963/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4963/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4962
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4962/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4962/comments
https://api.github.com/repos/huggingface/datasets/issues/4962/events
https://github.com/huggingface/datasets/pull/4962
1,368,155,365
PR_kwDODunzps4-sh-o
4,962
Update setup.py
{ "login": "DCNemesis", "id": 3616964, "node_id": "MDQ6VXNlcjM2MTY5NjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3616964?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DCNemesis", "html_url": "https://github.com/DCNemesis", "followers_url": "https://api.github.com/users/DCNemesis/followers", "following_url": "https://api.github.com/users/DCNemesis/following{/other_user}", "gists_url": "https://api.github.com/users/DCNemesis/gists{/gist_id}", "starred_url": "https://api.github.com/users/DCNemesis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DCNemesis/subscriptions", "organizations_url": "https://api.github.com/users/DCNemesis/orgs", "repos_url": "https://api.github.com/users/DCNemesis/repos", "events_url": "https://api.github.com/users/DCNemesis/events{/privacy}", "received_events_url": "https://api.github.com/users/DCNemesis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Before addressing this PR, we should be sure about the issue. See my comment in:\r\n- https://github.com/huggingface/datasets/issues/4961#issuecomment-1243376247", "Once we know 2022.8.2 works, I'm closing this PR, as the corresponding issue." ]
1,662,746,276,000
1,662,993,184,000
1,662,993,184,000
NONE
null
Exclude broken versions of fsspec. See the [related issue](https://github.com/huggingface/datasets/issues/4961).
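A sketch of the kind of version exclusion this PR proposed (it was closed once fsspec 2022.8.2 was confirmed to work); the exact specifier is an assumption, excluding the yanked 2022.8.0 and 2022.8.1 releases:

```python
# In setup.py: skip the yanked releases while allowing the fixed 2022.8.2.
install_requires = [
    "fsspec[http]!=2022.8.0,!=2022.8.1",
]
```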
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4962/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4962/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4962", "html_url": "https://github.com/huggingface/datasets/pull/4962", "diff_url": "https://github.com/huggingface/datasets/pull/4962.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4962.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4961
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4961/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4961/comments
https://api.github.com/repos/huggingface/datasets/issues/4961/events
https://github.com/huggingface/datasets/issues/4961
1,368,124,033
I_kwDODunzps5Ri-qB
4,961
fsspec 2022.8.2 breaks xopen in streaming mode
{ "login": "DCNemesis", "id": 3616964, "node_id": "MDQ6VXNlcjM2MTY5NjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3616964?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DCNemesis", "html_url": "https://github.com/DCNemesis", "followers_url": "https://api.github.com/users/DCNemesis/followers", "following_url": "https://api.github.com/users/DCNemesis/following{/other_user}", "gists_url": "https://api.github.com/users/DCNemesis/gists{/gist_id}", "starred_url": "https://api.github.com/users/DCNemesis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DCNemesis/subscriptions", "organizations_url": "https://api.github.com/users/DCNemesis/orgs", "repos_url": "https://api.github.com/users/DCNemesis/repos", "events_url": "https://api.github.com/users/DCNemesis/events{/privacy}", "received_events_url": "https://api.github.com/users/DCNemesis/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "loading `fsspec==2022.7.1` fixes this issue, setup.py would need to be changed to prevent users from using the latest version of fsspec.", "Opened [PR](https://github.com/huggingface/datasets/pull/4962) to address this.", "Hi @DCNemesis, thanks for reporting.\r\n\r\nThat was a temporary issue in `fsspec` releases 2022.8.0 and 2022.8.1. But they fixed it in their patch release 2022.8.2 (and yanked both previous versions). See:\r\n- https://github.com/huggingface/transformers/pull/18846\r\n\r\nAre you sure you have version 2022.8.2 installed?\r\n```shell\r\npip install -U fsspec\r\n```\r\n", "@albertvillanova I was using a temporary Google Colab instance, but checking it again today it seems it was loading 2022.8.1 rather than 2022.8.2. It's surprising that colab is using the version that was replaced the same day it was released. Testing with 2022.8.2 did work. It appears Colab [will be fixing it](https://github.com/googlecolab/colabtools/issues/3055) on their end too. ", "Thanks for the additional information.\r\n\r\nOnce we know 2022.8.2 works, I'm closing this issue. Feel free to reopen it if necessary.", "Colab just upgraded their default `fsspec` version to 2022.8.2:\r\n- https://github.com/googlecolab/colabtools/issues/3055#issuecomment-1244019010" ]
1,662,744,415,000
1,663,004,750,000
1,662,993,125,000
NONE
null
## Describe the bug When fsspec 2022.8.2 is installed in your environment, xopen will prematurely close files, making streaming mode inoperable. ## Steps to reproduce the bug ```python import datasets data = datasets.load_dataset('MLCommons/ml_spoken_words', 'id_wav', split='train', streaming=True) ``` ## Expected results Dataset should load as iterator. ## Actual results ``` [/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1737 # Return iterable dataset in case of streaming 1738 if streaming: -> 1739 return builder_instance.as_streaming_dataset(split=split) 1740 1741 # Some datasets are already processed on the HF google storage [/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_streaming_dataset(self, split, base_path) 1023 ) 1024 self._check_manual_download(dl_manager) -> 1025 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)} 1026 # By default, return all splits 1027 if split is None: [~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _split_generators(self, dl_manager) 182 name=datasets.Split.TRAIN, 183 gen_kwargs={ --> 184 "audio_archives": [download_audio(split="train", lang=lang) for lang in self.config.languages], 185 "local_audio_archives_paths": [download_extract_audio(split="train", lang=lang) for lang in 186 self.config.languages] if not dl_manager.is_streaming else None, [~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in <listcomp>(.0) 182 name=datasets.Split.TRAIN, 183 gen_kwargs={ --> 184 "audio_archives": [download_audio(split="train", lang=lang) for lang in self.config.languages], 185 "local_audio_archives_paths": [download_extract_audio(split="train", lang=lang) for lang in 186 self.config.languages] if not dl_manager.is_streaming else None, [~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _download_audio_archives(dl_manager, lang, format, split) 267 # for streaming case 268 def _download_audio_archives(dl_manager, lang, format, split): --> 269 archives_paths = _download_audio_archives_paths(dl_manager, lang, format, split) 270 return [dl_manager.iter_archive(archive_path) for archive_path in archives_paths] [~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _download_audio_archives_paths(dl_manager, lang, format, split) 251 n_files_path = dl_manager.download(n_files_url) 252 --> 253 with open(n_files_path, "r", encoding="utf-8") as file: 254 n_files = int(file.read().strip()) # the file contains a number of archives 255 ValueError: I/O operation on closed file. ``` ## Environment info - `datasets` version: 2.4.0 - Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4961/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4961/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4960
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4960/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4960/comments
https://api.github.com/repos/huggingface/datasets/issues/4960/events
https://github.com/huggingface/datasets/issues/4960
1,368,035,159
I_kwDODunzps5Rio9X
4,960
BioASQ AttributeError: 'BuilderConfig' object has no attribute 'schema'
{ "login": "DSLituiev", "id": 8426290, "node_id": "MDQ6VXNlcjg0MjYyOTA=", "avatar_url": "https://avatars.githubusercontent.com/u/8426290?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DSLituiev", "html_url": "https://github.com/DSLituiev", "followers_url": "https://api.github.com/users/DSLituiev/followers", "following_url": "https://api.github.com/users/DSLituiev/following{/other_user}", "gists_url": "https://api.github.com/users/DSLituiev/gists{/gist_id}", "starred_url": "https://api.github.com/users/DSLituiev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DSLituiev/subscriptions", "organizations_url": "https://api.github.com/users/DSLituiev/orgs", "repos_url": "https://api.github.com/users/DSLituiev/repos", "events_url": "https://api.github.com/users/DSLituiev/events{/privacy}", "received_events_url": "https://api.github.com/users/DSLituiev/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[]
null
[ "Following worked:\r\n\r\n```\r\ndata_dir = \"/Users/dlituiev/repos/datasets/bioasq/\"\r\nbioasq_task_b = load_dataset(\"aps/bioasq_task_b\", data_dir=data_dir, name=\"bioasq_9b_source\")\r\n```\r\n\r\nWould maintainers be open to one of the following:\r\n- automating this with a latest default config (e.g. `bioasq_9b_source`); how can this be generalized to other datasets?\r\n- providing an actionable error message that lists available `name` values? I only got available `name` values once I've provided something there (`name=\"aps/bioasq_task_b\"`), before it would not even mention that it requires `name` argument", "Hi ! In general the list of available configurations is prompted. I think this is an issue with this specific dataset.\r\n\r\nFeel free to open a new discussions at https://huggingface.co/datasets/aps/bioasq_task_b/discussions\r\n\r\ncc @apsdehal\r\n\r\nIn particular it sounds like the `BUILDER_CONFIG_CLASS= BigBioConfig ` class attribute is missing and the _info should account for schema being None and raise an error" ]
1,662,739,603,000
1,663,059,063,000
null
NONE
null
## Describe the bug I am trying to load a dataset from drive and running into an error. ## Steps to reproduce the bug ```python data_dir = "/Users/dlituiev/repos/datasets/bioasq/BioASQ-training9b" bioasq_task_b = load_dataset("aps/bioasq_task_b", data_dir=data_dir) ``` ## Actual results `AttributeError: 'BuilderConfig' object has no attribute 'schema'` <details> ``` Using custom data configuration default-a1ca3e05be5abf2f --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Input In [8], in <cell line: 2>() 1 data_dir = "/Users/dlituiev/repos/datasets/bioasq/BioASQ-training9b" ----> 2 bioasq_task_b = load_dataset("aps/bioasq_task_b", data_dir=data_dir) File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/load.py:1723, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1720 ignore_verifications = ignore_verifications or save_infos 1722 # Create a dataset builder -> 1723 builder_instance = load_dataset_builder( 1724 path=path, 1725 name=name, 1726 data_dir=data_dir, 1727 data_files=data_files, 1728 cache_dir=cache_dir, 1729 features=features, 1730 download_config=download_config, 1731 download_mode=download_mode, 1732 revision=revision, 1733 use_auth_token=use_auth_token, 1734 **config_kwargs, 1735 ) 1737 # Return iterable dataset in case of streaming 1738 if streaming: File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/load.py:1526, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs) 1523 raise ValueError(error_msg) 1525 # Instantiate the dataset builder -> 1526 builder_instance: DatasetBuilder = builder_cls( 1527 cache_dir=cache_dir, 1528 config_name=config_name, 1529 data_dir=data_dir, 1530 data_files=data_files, 1531 hash=hash, 1532 features=features, 1533 use_auth_token=use_auth_token, 1534 **builder_kwargs, 1535 **config_kwargs, 1536 ) 1538 return builder_instance File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/builder.py:1154, in GeneratorBasedBuilder.__init__(self, writer_batch_size, *args, **kwargs) 1153 def __init__(self, *args, writer_batch_size=None, **kwargs): -> 1154 super().__init__(*args, **kwargs) 1155 # Batch size used by the ArrowWriter 1156 # It defines the number of samples that are kept in memory before writing them 1157 # and also the length of the arrow chunks 1158 # None means that the ArrowWriter will use its default value 1159 self._writer_batch_size = writer_batch_size or self.DEFAULT_WRITER_BATCH_SIZE File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/builder.py:307, in DatasetBuilder.__init__(self, cache_dir, config_name, hash, base_path, info, features, use_auth_token, repo_id, data_files, data_dir, name, **config_kwargs) 305 if info is None: 306 info = self.get_exported_dataset_info() --> 307 info.update(self._info()) 308 info.builder_name = self.name 309 info.config_name = self.config.name File ~/.cache/huggingface/modules/datasets_modules/datasets/aps--bioasq_task_b/3d54b1213f7e8001eef755af92877f9efa44161ee83c2a70d5d649defa95759e/bioasq_task_b.py:477, in BioasqTaskBDataset._info(self) 474 def _info(self): 475 476 # BioASQ Task B source schema --> 477 if self.config.schema == "source": 478 features = datasets.Features( 479 { 480 "id": 
datasets.Value("string"), (...) 504 } 505 ) 506 # simplified schema for QA tasks AttributeError: 'BuilderConfig' object has no attribute 'schema' ``` </details> ## Environment info - `datasets` version: 2.4.0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.10.4 - PyArrow version: 9.0.0 - Pandas version: 1.4.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4960/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4960/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4959
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4959/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4959/comments
https://api.github.com/repos/huggingface/datasets/issues/4959/events
https://github.com/huggingface/datasets/pull/4959
1,367,924,429
PR_kwDODunzps4-rx6l
4,959
Fix data URLs of compguesswhat dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662,734,170,000
1,662,739,294,000
1,662,739,144,000
MEMBER
null
After we informed the `compguesswhat` dataset authors about an error with their data URLs, they have updated them: - https://github.com/CompGuessWhat/compguesswhat.github.io/issues/1 This PR updates their data URLs in our loading script. Related to: - #3191
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4959/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4959/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4959", "html_url": "https://github.com/huggingface/datasets/pull/4959", "diff_url": "https://github.com/huggingface/datasets/pull/4959.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4959.patch", "merged_at": 1662739144000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4958
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4958/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4958/comments
https://api.github.com/repos/huggingface/datasets/issues/4958/events
https://github.com/huggingface/datasets/issues/4958
1,367,695,376
I_kwDODunzps5RhWAQ
4,958
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.4.0/datasets/jsonl/jsonl.py
{ "login": "hasakikiki", "id": 66322047, "node_id": "MDQ6VXNlcjY2MzIyMDQ3", "avatar_url": "https://avatars.githubusercontent.com/u/66322047?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hasakikiki", "html_url": "https://github.com/hasakikiki", "followers_url": "https://api.github.com/users/hasakikiki/followers", "following_url": "https://api.github.com/users/hasakikiki/following{/other_user}", "gists_url": "https://api.github.com/users/hasakikiki/gists{/gist_id}", "starred_url": "https://api.github.com/users/hasakikiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hasakikiki/subscriptions", "organizations_url": "https://api.github.com/users/hasakikiki/orgs", "repos_url": "https://api.github.com/users/hasakikiki/repos", "events_url": "https://api.github.com/users/hasakikiki/events{/privacy}", "received_events_url": "https://api.github.com/users/hasakikiki/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "I have solved this problem... The extension of the file should be `.json` not `.jsonl`" ]
1,662,722,995,000
1,662,723,524,000
1,662,723,524,000
NONE
null
Hi,

When I use `load_dataset` with local jsonl files, the error below happens. When I type the link into the browser, it prompts `404: Not Found`. I downloaded the other `.py` files using the same method and it worked. It seems that the server is missing the appropriate file, or it is a problem with the code version.

```
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.3.0/datasets/jsonl/jsonl.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.3.0/datasets/jsonl/jsonl.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x2b08342004c0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))")))
```
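As the reporter's own fix in the comments notes, renaming the file to `.json` resolves it, because there is no remote `jsonl` loading script to fetch. Alternatively, the packaged `json` builder reads JSON Lines files directly without downloading any script; a minimal sketch with an illustrative path:

```python
from datasets import load_dataset

# The packaged "json" builder handles JSON Lines input directly, so no
# remote jsonl.py script needs to be fetched at all.
dataset = load_dataset("json", data_files={"train": "path/to/train.jsonl"})
```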
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4958/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4958/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4957
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4957/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4957/comments
https://api.github.com/repos/huggingface/datasets/issues/4957/events
https://github.com/huggingface/datasets/pull/4957
1,366,532,849
PR_kwDODunzps4-nGIk
4,957
Add `Dataset.from_generator`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I restarted the builder PR job just in case", "_The documentation is not available anymore as the PR was closed or merged._", "CI is now green. https://github.com/huggingface/doc-builder/pull/296 explains why it failed." ]
1,662,649,705,000
1,663,339,595,000
1,663,339,458,000
CONTRIBUTOR
null
Add `Dataset.from_generator` to the API to allow creating datasets from data larger than RAM. The implementation relies on a packaged module not exposed in `load_dataset` to tie this method with `datasets`' caching mechanism. Closes https://github.com/huggingface/datasets/issues/4417
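As a quick usage sketch of the method this PR adds (the generator and its fields below are illustrative):

```python
from datasets import Dataset

def gen():
    # Examples are yielded one at a time, so the full data never needs to fit in RAM.
    for i in range(1_000_000):
        yield {"id": i, "text": f"example {i}"}

ds = Dataset.from_generator(gen)
print(ds[0])  # {'id': 0, 'text': 'example 0'}
```

The examples are written to an Arrow cache file as they are generated, which is how the method ties into `datasets`' caching mechanism mentioned above.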
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4957/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4957/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4957", "html_url": "https://github.com/huggingface/datasets/pull/4957", "diff_url": "https://github.com/huggingface/datasets/pull/4957.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4957.patch", "merged_at": 1663339458000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4956
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4956/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4956/comments
https://api.github.com/repos/huggingface/datasets/issues/4956/events
https://github.com/huggingface/datasets/pull/4956
1,366,475,160
PR_kwDODunzps4-m5NU
4,956
Fix TF tests for 2.10
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662,647,950,000
1,662,650,211,000
1,662,650,084,000
MEMBER
null
Fixes #4953
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4956/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4956/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4956", "html_url": "https://github.com/huggingface/datasets/pull/4956", "diff_url": "https://github.com/huggingface/datasets/pull/4956.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4956.patch", "merged_at": 1662650084000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4955
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4955/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4955/comments
https://api.github.com/repos/huggingface/datasets/issues/4955/events
https://github.com/huggingface/datasets/issues/4955
1,366,382,314
I_kwDODunzps5RcVbq
4,955
Raise a more precise error when the URL is unreachable in streaming mode
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,662,645,157,000
1,662,645,216,000
null
CONTRIBUTOR
null
See for example:
- https://github.com/huggingface/datasets/issues/3191
- https://github.com/huggingface/datasets/issues/3186

It would help provide clearer information on the Hub and help dataset maintainers solve the issue by themselves more quickly.

Currently:
- https://huggingface.co/datasets/compguesswhat

<img width="1029" alt="Screenshot 2022-09-08 at 15 51 37" src="https://user-images.githubusercontent.com/1676121/189139946-6deffb91-f21b-4281-8825-a98026c69740.png">

- https://huggingface.co/datasets/nli_tr

<img width="1032" alt="Screenshot 2022-09-08 at 15 51 44" src="https://user-images.githubusercontent.com/1676121/189139963-d26490ed-ad23-48ea-9cfc-1ab9c4d08d0c.png">

cc @albertvillanova
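A minimal sketch of the kind of targeted error being requested, using a plain HTTP probe; the helper and error class names are hypothetical, not the library's actual API:

```python
import requests

class StreamingUrlUnreachableError(ConnectionError):
    """Hypothetical error that names the exact URL that could not be reached."""

def check_streamable(url: str, timeout: float = 10.0) -> None:
    # Probe the URL before streaming so the failure message is precise.
    try:
        response = requests.head(url, allow_redirects=True, timeout=timeout)
    except requests.RequestException as exc:
        raise StreamingUrlUnreachableError(
            f"Couldn't reach {url} in streaming mode: {exc}"
        ) from exc
    if response.status_code >= 400:
        raise StreamingUrlUnreachableError(
            f"Couldn't reach {url} in streaming mode (HTTP {response.status_code})"
        )
```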
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4955/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4955/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4954
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4954/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4954/comments
https://api.github.com/repos/huggingface/datasets/issues/4954/events
https://github.com/huggingface/datasets/pull/4954
1,366,369,682
PR_kwDODunzps4-mhl5
4,954
Pin TensorFlow temporarily
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662,644,775,000
1,662,646,353,000
1,662,646,203,000
MEMBER
null
Temporarily pin TensorFlow until a permanent solution is found.

Related to:
- #4953
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4954/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4954/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4954", "html_url": "https://github.com/huggingface/datasets/pull/4954", "diff_url": "https://github.com/huggingface/datasets/pull/4954.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4954.patch", "merged_at": 1662646203000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4953
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4953/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4953/comments
https://api.github.com/repos/huggingface/datasets/issues/4953/events
https://github.com/huggingface/datasets/issues/4953
1,366,356,514
I_kwDODunzps5RcPIi
4,953
CI test of TensorFlow is failing
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
1,662,644,369,000
1,662,650,085,000
1,662,650,085,000
MEMBER
null
## Describe the bug

The following CI test fails: https://github.com/huggingface/datasets/runs/8246722693?check_suite_focus=true

```
FAILED tests/test_py_utils.py::TempSeedTest::test_tensorflow - AssertionError:
```

Details:

```
_________________________ TempSeedTest.test_tensorflow _________________________

[gw0] linux -- Python 3.7.13 /opt/hostedtoolcache/Python/3.7.13/x64/bin/python

self = <tests.test_py_utils.TempSeedTest testMethod=test_tensorflow>

    @require_tf
    def test_tensorflow(self):
        import tensorflow as tf
        from tensorflow.keras import layers

        def gen_random_output():
            model = layers.Dense(2)
            x = tf.random.uniform((1, 3))
            return model(x).numpy()

        with temp_seed(42, set_tensorflow=True):
            out1 = gen_random_output()
        with temp_seed(42, set_tensorflow=True):
            out2 = gen_random_output()
        out3 = gen_random_output()

>       np.testing.assert_equal(out1, out2)
E       AssertionError:
E       Arrays are not equal
E
E       Mismatched elements: 2 / 2 (100%)
E       Max absolute difference: 0.84619296
E       Max relative difference: 16.083529
E        x: array([[-0.793581,  0.333286]], dtype=float32)
E        y: array([[0.052612,  0.539708]], dtype=float32)

tests/test_py_utils.py:149: AssertionError
```
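For reference, a standalone sketch of what the test expects: seeding TensorFlow so that two runs of the snippet produce identical outputs. `tf.keras.utils.set_random_seed` (available since TF 2.7) is used here as an assumed stand-in for `temp_seed`; whether it behaves the same under TF 2.10 is exactly what the fix had to establish:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def gen_random_output():
    model = layers.Dense(2)
    x = tf.random.uniform((1, 3))
    return model(x).numpy()

# Seeds Python's random, NumPy, and TensorFlow in one call (TF >= 2.7).
tf.keras.utils.set_random_seed(42)
out1 = gen_random_output()
tf.keras.utils.set_random_seed(42)
out2 = gen_random_output()
np.testing.assert_equal(out1, out2)
```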
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4953/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4953/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4952
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4952/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4952/comments
https://api.github.com/repos/huggingface/datasets/issues/4952/events
https://github.com/huggingface/datasets/pull/4952
1,366,354,604
PR_kwDODunzps4-meM0
4,952
Add test-datasets CI job
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Closing this one since the dataset scripts will be removed in https://github.com/huggingface/datasets/pull/4974" ]
1,662,644,310,000
1,663,334,882,000
1,663,334,748,000
MEMBER
null
To avoid having too many conflicts between the datasets and metrics dependencies, I split the CI into `test` and `test-catalog`:
- `test` runs the tests of the core `datasets` lib
- `test-catalog` runs the tests of the dataset scripts and metric scripts

This also makes `pip install -e .[dev]` much smaller for developers.

WDYT @albertvillanova ?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4952/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4952/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4952", "html_url": "https://github.com/huggingface/datasets/pull/4952", "diff_url": "https://github.com/huggingface/datasets/pull/4952.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4952.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4951
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4951/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4951/comments
https://api.github.com/repos/huggingface/datasets/issues/4951/events
https://github.com/huggingface/datasets/pull/4951
1,365,954,814
PR_kwDODunzps4-lDqd
4,951
Fix license information in qasc dataset card
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662,631,479,000
1,662,648,887,000
1,662,648,725,000
MEMBER
null
This PR adds the license information to the `qasc` dataset. As reported via GitHub by Tushar Khot, the dataset is licensed under CC BY 4.0:
- https://github.com/allenai/qasc/issues/5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4951/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4951/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4951", "html_url": "https://github.com/huggingface/datasets/pull/4951", "diff_url": "https://github.com/huggingface/datasets/pull/4951.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4951.patch", "merged_at": 1662648725000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4950
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4950/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4950/comments
https://api.github.com/repos/huggingface/datasets/issues/4950/events
https://github.com/huggingface/datasets/pull/4950
1,365,458,633
PR_kwDODunzps4-jWZ1
4,950
Update Enwik8 broken link and information
{ "login": "mtanghu", "id": 54819091, "node_id": "MDQ6VXNlcjU0ODE5MDkx", "avatar_url": "https://avatars.githubusercontent.com/u/54819091?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mtanghu", "html_url": "https://github.com/mtanghu", "followers_url": "https://api.github.com/users/mtanghu/followers", "following_url": "https://api.github.com/users/mtanghu/following{/other_user}", "gists_url": "https://api.github.com/users/mtanghu/gists{/gist_id}", "starred_url": "https://api.github.com/users/mtanghu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mtanghu/subscriptions", "organizations_url": "https://api.github.com/users/mtanghu/orgs", "repos_url": "https://api.github.com/users/mtanghu/repos", "events_url": "https://api.github.com/users/mtanghu/events{/privacy}", "received_events_url": "https://api.github.com/users/mtanghu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662,606,900,000
1,662,648,810,000
1,662,648,660,000
CONTRIBUTOR
null
The current enwik8 dataset link gives a 502 Bad Gateway error, which can be viewed on https://huggingface.co/datasets/enwik8 (click the dropdown to see the dataset preview; it will show the error). This PR corrects the links and JSON metadata, and adds a bit more information about enwik8.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4950/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4950/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4950", "html_url": "https://github.com/huggingface/datasets/pull/4950", "diff_url": "https://github.com/huggingface/datasets/pull/4950.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4950.patch", "merged_at": 1662648660000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4949
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4949/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4949/comments
https://api.github.com/repos/huggingface/datasets/issues/4949/events
https://github.com/huggingface/datasets/pull/4949
1,365,251,916
PR_kwDODunzps4-iqzI
4,949
Update enwik8 fixing the broken link
{ "login": "mtanghu", "id": 54819091, "node_id": "MDQ6VXNlcjU0ODE5MDkx", "avatar_url": "https://avatars.githubusercontent.com/u/54819091?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mtanghu", "html_url": "https://github.com/mtanghu", "followers_url": "https://api.github.com/users/mtanghu/followers", "following_url": "https://api.github.com/users/mtanghu/following{/other_user}", "gists_url": "https://api.github.com/users/mtanghu/gists{/gist_id}", "starred_url": "https://api.github.com/users/mtanghu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mtanghu/subscriptions", "organizations_url": "https://api.github.com/users/mtanghu/orgs", "repos_url": "https://api.github.com/users/mtanghu/repos", "events_url": "https://api.github.com/users/mtanghu/events{/privacy}", "received_events_url": "https://api.github.com/users/mtanghu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Closing pull request to following contributing guidelines of making a new branch and will make a new pull request" ]
1,662,589,034,000
1,662,606,844,000
1,662,606,844,000
CONTRIBUTOR
null
The current enwik8 dataset link gives a 502 Bad Gateway error, which can be viewed on https://huggingface.co/datasets/enwik8 (click the dropdown to see the dataset preview; it will show the error). This PR corrects the links and JSON metadata, and adds a bit more information about enwik8.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4949/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4949/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4949", "html_url": "https://github.com/huggingface/datasets/pull/4949", "diff_url": "https://github.com/huggingface/datasets/pull/4949.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4949.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4948
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4948/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4948/comments
https://api.github.com/repos/huggingface/datasets/issues/4948/events
https://github.com/huggingface/datasets/pull/4948
1,364,973,778
PR_kwDODunzps4-hwsl
4,948
Fix minor typo in error message for missing imports
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662,571,251,000
1,662,649,171,000
1,662,649,035,000
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4948/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4948/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4948", "html_url": "https://github.com/huggingface/datasets/pull/4948", "diff_url": "https://github.com/huggingface/datasets/pull/4948.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4948.patch", "merged_at": 1662649035000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4947
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4947/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4947/comments
https://api.github.com/repos/huggingface/datasets/issues/4947/events
https://github.com/huggingface/datasets/pull/4947
1,364,967,957
PR_kwDODunzps4-hvbq
4,947
Try to fix the Windows CI after TF update 2.10
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4947). All of your documentation changes will be reflected on that endpoint." ]
1,662,570,889,000
1,662,628,390,000
1,662,628,390,000
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4947/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4947/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4947", "html_url": "https://github.com/huggingface/datasets/pull/4947", "diff_url": "https://github.com/huggingface/datasets/pull/4947.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4947.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4946
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4946/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4946/comments
https://api.github.com/repos/huggingface/datasets/issues/4946/events
https://github.com/huggingface/datasets/pull/4946
1,364,692,069
PR_kwDODunzps4-g0Hz
4,946
Introduce regex check when pushing as well
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Let me take over this PR if you don't mind" ]
1,662,558,358,000
1,663,064,341,000
1,663,064,194,000
MEMBER
null
Closes https://github.com/huggingface/datasets/issues/4945 by adding a regex check when pushing to the Hub. Let me know if this is helpful and if it's the fix you would have in mind for the issue; I'm happy to contribute tests.
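A minimal sketch of such a check, reusing the pattern the loader enforces (quoted in the traceback of the issue below); the helper name is illustrative:

```python
import re

_split_re = r"^\w+(\.\w+)*$"  # the pattern enforced when splits are loaded back

def validate_split_name(name: str) -> str:
    # Fail fast before uploading anything that could never be reloaded.
    if re.match(_split_re, name) is None:
        raise ValueError(f"Split name should match '{_split_re}' but got '{name}'.")
    return name

validate_split_name("train")                   # ok
validate_split_name("identifier-with-column")  # raises ValueError
```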
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4946/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4946/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4946", "html_url": "https://github.com/huggingface/datasets/pull/4946", "diff_url": "https://github.com/huggingface/datasets/pull/4946.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4946.patch", "merged_at": 1663064194000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4945
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4945/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4945/comments
https://api.github.com/repos/huggingface/datasets/issues/4945/events
https://github.com/huggingface/datasets/issues/4945
1,364,691,096
I_kwDODunzps5RV4iY
4,945
Push to hub can push splits that do not respect the regex
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
1,662,558,317,000
1,663,064,195,000
1,663,064,195,000
MEMBER
null
## Describe the bug

The `push_to_hub` method can push splits that do not respect the regex check that is used for downloads. Therefore, splits may be pushed but never re-used, which can be painful if the split was done after runtime preprocessing.

## Steps to reproduce the bug

```python
>>> from datasets import Dataset, DatasetDict, load_dataset
>>> d = Dataset.from_dict({'x': [1,2,3], 'y': [1,2,3]})
>>> di = DatasetDict()
>>> di['identifier-with-column'] = d
>>> di.push_to_hub('open-source-metrics/test')
Pushing split identifier-with-column to the Hub.
Pushing dataset shards to the dataset hub: 100%|██████████| 1/1 [00:04<00:00,  4.40s/it]
```

Loading it afterwards:

```python
>>> load_dataset('open-source-metrics/test')
Downloading: 100%|██████████| 610/610 [00:00<00:00, 432kB/s]
Using custom data configuration open-source-metrics--test-28b63ec7cde80488
Downloading and preparing dataset None/None (download: 950 bytes, generated: 48 bytes, post-processed: Unknown size, total: 998 bytes) to /home/lysandre/.cache/huggingface/datasets/open-source-metrics___parquet/open-source-metrics--test-28b63ec7cde80488/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...
Downloading data files:   0%|          | 0/1 [00:00<?, ?it/s]
Downloading data: 100%|██████████| 950/950 [00:00<00:00, 1.01MB/s]
Downloading data files: 100%|██████████| 1/1 [00:01<00:00,  1.48s/it]
Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 2291.97it/s]
Traceback (most recent call last):
  File "/home/lysandre/.pyenv/versions/3.10.6/lib/python3.10/code.py", line 90, in runcode
    exec(code, self.locals)
  File "<input>", line 1, in <module>
  File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/load.py", line 1746, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/builder.py", line 704, in download_and_prepare
    self._download_and_prepare(
  File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/builder.py", line 771, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 48, in _split_generators
    splits.append(datasets.SplitGenerator(name=split_name, gen_kwargs={"files": files}))
  File "<string>", line 5, in __init__
  File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/splits.py", line 599, in __post_init__
    NamedSplit(self.name)  # check that it's a valid split name
  File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/splits.py", line 346, in __init__
    raise ValueError(f"Split name should match '{_split_re}' but got '{split_name}'.")
ValueError: Split name should match '^\w+(\.\w+)*$' but got 'identifier-with-column'.
```

## Expected results

I would expect `push_to_hub` to stop me in my tracks if trying to upload a split that will not be working afterwards.

## Actual results

See above

## Environment info

- `datasets` version: 2.4.0
- Platform: Linux-5.15.64-1-lts-x86_64-with-glibc2.36
- Python version: 3.10.6
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4945/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4945/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4944
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4944/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4944/comments
https://api.github.com/repos/huggingface/datasets/issues/4944/events
https://github.com/huggingface/datasets/issues/4944
1,364,313,569
I_kwDODunzps5RUcXh
4,944
larger dataset, larger GPU memory in the training phase? Is that correct?
{ "login": "debby1103", "id": 38886373, "node_id": "MDQ6VXNlcjM4ODg2Mzcz", "avatar_url": "https://avatars.githubusercontent.com/u/38886373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/debby1103", "html_url": "https://github.com/debby1103", "followers_url": "https://api.github.com/users/debby1103/followers", "following_url": "https://api.github.com/users/debby1103/following{/other_user}", "gists_url": "https://api.github.com/users/debby1103/gists{/gist_id}", "starred_url": "https://api.github.com/users/debby1103/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/debby1103/subscriptions", "organizations_url": "https://api.github.com/users/debby1103/orgs", "repos_url": "https://api.github.com/users/debby1103/repos", "events_url": "https://api.github.com/users/debby1103/events{/privacy}", "received_events_url": "https://api.github.com/users/debby1103/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "does the trainer save it in GPU? sooo curious... how to fix it", "It's my bad. didn't limit the input length" ]
1,662,540,390,000
1,662,554,098,000
1,662,554,098,000
NONE
null
```python
from datasets import set_caching_enabled, load_from_disk, concatenate_datasets

set_caching_enabled(False)

for ds_name in ["squad", "newsqa", "nqopen", "narrativeqa"]:
    train_ds = load_from_disk("../../../dall/downstream/processedproqa/{}-train.hf".format(ds_name))
    break

train_ds = concatenate_datasets([train_ds, train_ds, train_ds, train_ds])  # operation 1

trainer = QuestionAnsweringTrainer(  # huggingface trainer
    model=model,
    args=training_args,
    train_dataset=train_ds,
    eval_dataset=None,
    eval_examples=None,
    answer_column_name=answer_column,
    dataset_name="squad",
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics if training_args.predict_with_generate else None,
)
```

With operation 1, the GPU memory usage increases from 16 GB to 23 GB.
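As the author's follow-up comment notes, the growth came from unbounded input length rather than from dataset size. A minimal sketch of capping sequence length at tokenization time; the tokenizer, column name, and max length below are illustrative:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # Truncating to a fixed max_length bounds per-batch GPU memory,
    # independently of how many examples the dataset holds.
    return tokenizer(batch["question"], truncation=True, max_length=384)

train_ds = train_ds.map(tokenize, batched=True)
```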
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4944/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4944/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4943
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4943/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4943/comments
https://api.github.com/repos/huggingface/datasets/issues/4943/events
https://github.com/huggingface/datasets/pull/4943
1,363,967,650
PR_kwDODunzps4-eZd_
4,943
Add splits to MBPP dataset
{ "login": "cwarny", "id": 2788526, "node_id": "MDQ6VXNlcjI3ODg1MjY=", "avatar_url": "https://avatars.githubusercontent.com/u/2788526?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cwarny", "html_url": "https://github.com/cwarny", "followers_url": "https://api.github.com/users/cwarny/followers", "following_url": "https://api.github.com/users/cwarny/following{/other_user}", "gists_url": "https://api.github.com/users/cwarny/gists{/gist_id}", "starred_url": "https://api.github.com/users/cwarny/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cwarny/subscriptions", "organizations_url": "https://api.github.com/users/cwarny/orgs", "repos_url": "https://api.github.com/users/cwarny/repos", "events_url": "https://api.github.com/users/cwarny/events{/privacy}", "received_events_url": "https://api.github.com/users/cwarny/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "```\r\n(env) cwarny@Cedrics-Air datasets % RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_mbpp\r\n================================================================================================ test session starts =================================================================================================\r\nplatform darwin -- Python 3.8.13, pytest-7.1.3, pluggy-1.0.0\r\nrootdir: /Users/cwarny/datasets, configfile: setup.cfg\r\ncollected 1 item \r\n\r\ntests/test_dataset_common.py . [100%]\r\n\r\n================================================================================================= 1 passed in 1.12s ==================================================================================================\r\n(env) cwarny@Cedrics-Air datasets % RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_mbpp \r\n================================================================================================ test session starts =================================================================================================\r\nplatform darwin -- Python 3.8.13, pytest-7.1.3, pluggy-1.0.0\r\nrootdir: /Users/cwarny/datasets, configfile: setup.cfg\r\ncollected 1 item \r\n\r\ntests/test_dataset_common.py . [100%]\r\n\r\n================================================================================================= 1 passed in 0.35s ==================================================================================================\r\n\r\n```", "_The documentation is not available anymore as the PR was closed or merged._", "Hi @cwarny ! Thanks for adding the correct splits :)\r\n\r\nYou can fix the CI error by running `make style` - this should reformat the dataset script", "done" ]
1,662,513,511,000
1,663,072,159,000
1,663,072,041,000
CONTRIBUTOR
null
This PR addresses https://github.com/huggingface/datasets/issues/4795
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4943/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4943/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4943", "html_url": "https://github.com/huggingface/datasets/pull/4943", "diff_url": "https://github.com/huggingface/datasets/pull/4943.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4943.patch", "merged_at": 1663072041000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4942
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4942/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4942/comments
https://api.github.com/repos/huggingface/datasets/issues/4942/events
https://github.com/huggingface/datasets/issues/4942
1,363,869,421
I_kwDODunzps5RSv7t
4,942
Trec Dataset has incorrect labels
{ "login": "wmpauli", "id": 6539145, "node_id": "MDQ6VXNlcjY1MzkxNDU=", "avatar_url": "https://avatars.githubusercontent.com/u/6539145?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wmpauli", "html_url": "https://github.com/wmpauli", "followers_url": "https://api.github.com/users/wmpauli/followers", "following_url": "https://api.github.com/users/wmpauli/following{/other_user}", "gists_url": "https://api.github.com/users/wmpauli/gists{/gist_id}", "starred_url": "https://api.github.com/users/wmpauli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wmpauli/subscriptions", "organizations_url": "https://api.github.com/users/wmpauli/orgs", "repos_url": "https://api.github.com/users/wmpauli/repos", "events_url": "https://api.github.com/users/wmpauli/events{/privacy}", "received_events_url": "https://api.github.com/users/wmpauli/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @wmpauli. \r\n\r\nIndeed we recently fixed this issue:\r\n- #4801 \r\n\r\nThe fix will be accessible after our next library release. In the meantime, you can have it by passing `revision=\"main\"` to `load_dataset`." ]
1,662,502,420,000
1,662,635,523,000
1,662,635,523,000
NONE
null
## Describe the bug

Both coarse and fine labels seem to be out of line.

## Steps to reproduce the bug

```python
import pandas as pd
from datasets import load_dataset

dataset = "trec"
raw_datasets = load_dataset(dataset)
df = pd.DataFrame(raw_datasets["test"])
df.head()
```

## Expected results

text (string) | coarse_label (class label) | fine_label (class label)
-- | -- | --
How far is it from Denver to Aspen ? | 5 (NUM) | 40 (NUM:dist)
What county is Modesto , California in ? | 4 (LOC) | 32 (LOC:city)
Who was Galileo ? | 3 (HUM) | 31 (HUM:desc)
What is an atom ? | 2 (DESC) | 24 (DESC:def)
When did Hawaii become a state ? | 5 (NUM) | 39 (NUM:date)

## Actual results

index | label-coarse | label-fine | text
-- | -- | -- | --
0 | 4 | 40 | How far is it from Denver to Aspen ?
1 | 5 | 21 | What county is Modesto , California in ?
2 | 3 | 12 | Who was Galileo ?
3 | 0 | 7 | What is an atom ?
4 | 4 | 8 | When did Hawaii become a state ?

## Environment info

- `datasets` version: 2.4.0
- Platform: Linux-5.4.0-1086-azure-x86_64-with-glibc2.27
- Python version: 3.9.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
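Per the maintainer's reply above, the fix (#4801) was merged but not yet released at the time; until the release, the corrected labels can be obtained by pinning the loading script to the `main` branch:

```python
from datasets import load_dataset

# Load the "trec" loading script from the repository's main branch,
# where the label fix (#4801) is already merged.
raw_datasets = load_dataset("trec", revision="main")
```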
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4942/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4942/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4941
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4941/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4941/comments
https://api.github.com/repos/huggingface/datasets/issues/4941/events
https://github.com/huggingface/datasets/pull/4941
1,363,622,861
PR_kwDODunzps4-dQ9F
4,941
Add Papers with Code ID to scifact dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662,486,397,000
1,662,488,897,000
1,662,488,761,000
MEMBER
null
This PR:
- adds the Papers with Code ID
- forces a sync between GitHub and the Hub, which previously failed due to a Hub validation error on the license tag: https://github.com/huggingface/datasets/runs/8200223631?check_suite_focus=true
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4941/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4941/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4941", "html_url": "https://github.com/huggingface/datasets/pull/4941", "diff_url": "https://github.com/huggingface/datasets/pull/4941.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4941.patch", "merged_at": 1662488761000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4940
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4940/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4940/comments
https://api.github.com/repos/huggingface/datasets/issues/4940/events
https://github.com/huggingface/datasets/pull/4940
1,363,513,058
PR_kwDODunzps4-c6WY
4,940
Fix multilinguality tag and missing sections in xquad_r dataset card
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662,480,335,000
1,662,977,467,000
1,662,977,328,000
MEMBER
null
This PR fixes an issue reported on the Hub:
- Label as multilingual: https://huggingface.co/datasets/xquad_r/discussions/1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4940/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4940/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4940", "html_url": "https://github.com/huggingface/datasets/pull/4940", "diff_url": "https://github.com/huggingface/datasets/pull/4940.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4940.patch", "merged_at": 1662977328000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4939
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4939/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4939/comments
https://api.github.com/repos/huggingface/datasets/issues/4939/events
https://github.com/huggingface/datasets/pull/4939
1,363,468,679
PR_kwDODunzps4-cw4A
4,939
Fix NonMatchingChecksumError in adv_glue dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,662,478,276,000
1,662,486,130,000
1,662,485,956,000
MEMBER
null
Fix an issue reported on the Hub: https://huggingface.co/datasets/adv_glue/discussions/1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4939/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4939/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4939", "html_url": "https://github.com/huggingface/datasets/pull/4939", "diff_url": "https://github.com/huggingface/datasets/pull/4939.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4939.patch", "merged_at": 1662485956000 }
true