Column schema:

| Column | Type | Lengths / distinct values |
|---|---|---|
| url | string | lengths 61-61 |
| repository_url | string | 1 distinct value |
| labels_url | string | lengths 75-75 |
| comments_url | string | lengths 70-70 |
| events_url | string | lengths 68-68 |
| html_url | string | lengths 49-51 |
| id | int64 | 758M-1.95B |
| node_id | string | lengths 18-32 |
| number | int64 | 1.2k-6.31k |
| title | string | lengths 1-290 |
| user | dict | |
| labels | list | lengths 0-3 |
| state | string | 2 distinct values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0-4 |
| milestone | dict | |
| comments | sequence | lengths 0-30 |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | string | 3 distinct values |
| active_lock_reason | float64 | |
| draft | float64 | 0-1 |
| pull_request | dict | |
| body | string | lengths 0-36.2k |
| reactions | dict | |
| timeline_url | string | lengths 70-70 |
| performed_via_github_app | float64 | |
| state_reason | string | 3 distinct values |
| is_pull_request | bool | 2 classes |
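The rows that follow are GitHub issue and pull request records with the columns listed above. As a hedged illustration only (the file name `github_issues.jsonl` is a placeholder, not something this dump specifies), a dump with this schema could be loaded and queried with the `datasets` library roughly as follows:

```python
from datasets import load_dataset

# Placeholder path: point this at wherever the dump is actually stored.
ds = load_dataset("json", data_files="github_issues.jsonl", split="train")

# Inspect the inferred column types (they should roughly match the schema table above).
print(ds.features)

# Example query: keep only plain issues, dropping pull requests.
issues_only = ds.filter(lambda row: not row["is_pull_request"])
print(f"{len(issues_only)} rows are issues rather than pull requests")
```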
https://api.github.com/repos/huggingface/datasets/issues/2484
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2484/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2484/comments
https://api.github.com/repos/huggingface/datasets/issues/2484/events
https://github.com/huggingface/datasets/issues/2484
919,092,635
MDU6SXNzdWU5MTkwOTI2MzU=
2,484
Implement loading a dataset builder
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
null
[ "#self-assign" ]
"2021-06-11T18:47:22"
"2021-07-05T10:45:57"
"2021-07-05T10:45:57"
MEMBER
null
null
null
As discussed with @stas00 and @lhoestq, this would allow things like: ```python from datasets import load_dataset_builder dataset_name = "openwebtext" builder = load_dataset_builder(dataset_name) print(builder.cache_dir) ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2484/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2484/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1803
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1803/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1803/comments
https://api.github.com/repos/huggingface/datasets/issues/1803/events
https://github.com/huggingface/datasets/issues/1803
798,243,904
MDU6SXNzdWU3OTgyNDM5MDQ=
1,803
Querying examples from big datasets is slower than small datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "Hello, @lhoestq / @gaceladri : We have been seeing similar behavior with bigger datasets, where querying time increases. Are you folks aware of any solution that fixes this problem yet? ", "Hi ! I'm pretty sure that it can be fixed by using the Arrow IPC file format instead of the raw streaming format but I haven't tested yet.\r\nI'll take a look at it soon and let you know", "My workaround is to shard the dataset into splits in my ssd disk and feed the data in different training sessions. But it is a bit of a pain when we need to reload the last training session with the rest of the split with the Trainer in transformers.\r\n\r\nI mean, when I split the training and then reloads the model and optimizer, it not gets the correct global_status of the optimizer, so I need to hardcode some things. I'm planning to open an issue in transformers and think about it.\r\n```\r\nfrom datasets import load_dataset\r\n\r\nbook_corpus = load_dataset(\"bookcorpus\", split=\"train[:25%]\")\r\nwikicorpus = load_dataset(\"wikicorpus\", split=\"train[:25%]\")\r\nopenwebtext = load_dataset(\"openwebtext\", split=\"train[:25%]\")\r\n\r\nbig_dataset = datasets.concatenate_datasets([wikicorpus, openwebtext, book_corpus])\r\nbig_dataset.shuffle(seed=42)\r\nbig_dataset = big_dataset.map(encode, batched=True, num_proc=20, load_from_cache_file=True, writer_batch_size=5000)\r\nbig_dataset.set_format(type='torch', columns=[\"text\", \"input_ids\", \"attention_mask\", \"token_type_ids\"])\r\n\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./linear_bert\",\r\n overwrite_output_dir=True,\r\n per_device_train_batch_size=71,\r\n save_steps=500,\r\n save_total_limit=10,\r\n logging_first_step=True,\r\n logging_steps=100,\r\n gradient_accumulation_steps=9,\r\n fp16=True,\r\n dataloader_num_workers=20,\r\n warmup_steps=24000,\r\n learning_rate=0.000545205002870214,\r\n adam_epsilon=1e-6,\r\n adam_beta2=0.98,\r\n weight_decay=0.01,\r\n max_steps=138974, # the total number of steps after concatenating 100% datasets\r\n max_grad_norm=1.0,\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=big_dataset,\r\n tokenizer=tokenizer))\r\n```\r\n\r\nI do one training pass with the total steps of this shard and I use len(bbig)/batchsize to stop the training (hardcoded in the trainer.py) when I pass over all the examples in this split.\r\n\r\nNow Im working, I will edit the comment with a more elaborated answer when I left the work.", "I just tested and using the Arrow File format doesn't improve the speed... This will need further investigation.\r\n\r\nMy guess is that it has to iterate over the record batches or chunks of a ChunkedArray in order to retrieve elements.\r\n\r\nHowever if we know in advance in which chunk the element is, and at what index it is, then we can access it instantaneously. But this requires dealing with the chunked arrays instead of the pyarrow Table directly which is not practical.", "I have a dataset with about 2.7 million rows (which I'm loading via `load_from_disk`), and I need to fetch around 300k (particular) rows of it, by index. Currently this is taking a really long time (~8 hours). I tried sharding the large dataset but overall it doesn't change how long it takes to fetch the desired rows.\r\n\r\nI actually have enough RAM that I could fit the large dataset in memory. Would having the large dataset in memory speed up querying? 
To find out, I tried to load (a column of) the large dataset into memory like this:\r\n```\r\ncolumn_data = large_ds['column_name']\r\n```\r\nbut in itself this takes a really long time.\r\n\r\nI'm pretty stuck - do you have any ideas what I should do? ", "Hi ! Feel free to post a message on the [forum](https://discuss.huggingface.co/c/datasets/10). I'd be happy to help you with this.\r\n\r\nIn your post on the forum, feel free to add more details about your setup:\r\nWhat are column names and types of your dataset ?\r\nHow was the dataset constructed ?\r\nIs the dataset shuffled ?\r\nIs the dataset tokenized ?\r\nAre you on a SSD or an HDD ?\r\n\r\nI'm sure we can figure something out.\r\nFor example on my laptop I can access the 6 millions articles from wikipedia in less than a minute.", "Thanks @lhoestq, I've [posted on the forum](https://discuss.huggingface.co/t/fetching-rows-of-a-large-dataset-by-index/4271?u=abisee).", "Fixed by #2122." ]
"2021-02-01T11:08:23"
"2021-08-04T18:11:01"
"2021-08-04T18:10:42"
MEMBER
null
null
null
After some experiments with bookcorpus I noticed that querying examples from big datasets is slower than small datasets. For example ```python from datasets import load_dataset b1 = load_dataset("bookcorpus", split="train[:1%]") b50 = load_dataset("bookcorpus", split="train[:50%]") b100 = load_dataset("bookcorpus", split="train[:100%]") %timeit _ = b1[-1] # 12.2 µs ± 70.4 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) %timeit _ = b50[-1] # 92.5 µs ± 1.24 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) %timeit _ = b100[-1] # 177 µs ± 3.13 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) ``` It looks like the time to fetch the example increases with the size of the dataset. This is maybe due to the use of the Arrow streaming format to store the data on disk. I guess pyarrow needs to iterate through the file as a stream to find the queried sample. Maybe switching to the Arrow IPC file format could help fixing this issue. Indeed according to the [documentation](https://arrow.apache.org/docs/format/Columnar.html?highlight=arrow1#ipc-file-format), it's identical to the streaming format except that it contains the memory offsets of each sample, which could fix the issue: > We define a “file format” supporting random access that is build with the stream format. The file starts and ends with a magic string ARROW1 (plus padding). What follows in the file is identical to the stream format. At the end of the file, we write a footer containing a redundant copy of the schema (which is a part of the streaming format) plus memory offsets and sizes for each of the data blocks in the file. This enables random access any record batch in the file. See File.fbs for the precise details of the file footer. cc @gaceladri since it can help speed up your training when this one is fixed.
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/1803/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1803/timeline
null
completed
false
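The record above (issue 1803) attributes the slow lookups to the Arrow streaming format and suggests the Arrow IPC file format, whose footer records per-batch offsets for random access. Below is a minimal sketch of that difference using pyarrow; the file names and chunk size are arbitrary, and this is not the actual `datasets` storage code:

```python
import pyarrow as pa

# Toy table standing in for a dataset shard (illustrative only).
table = pa.table({"text": [f"example {i}" for i in range(1_000)]})

# Streaming format: record batches written back to back with no footer,
# so locating batch i requires scanning from the start of the stream.
with pa.OSFile("data_stream.arrow", "wb") as sink:
    with pa.ipc.new_stream(sink, table.schema) as writer:
        writer.write_table(table, max_chunksize=100)

# IPC file format: same batches plus a footer with per-batch offsets,
# which allows jumping directly to any record batch.
with pa.OSFile("data_file.arrow", "wb") as sink:
    with pa.ipc.new_file(sink, table.schema) as writer:
        writer.write_table(table, max_chunksize=100)

with pa.OSFile("data_file.arrow", "rb") as source:
    reader = pa.ipc.open_file(source)
    # Random access to the last batch without reading the preceding ones.
    last_batch = reader.get_batch(reader.num_record_batches - 1)
    print(last_batch.num_rows)
```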
https://api.github.com/repos/huggingface/datasets/issues/1327
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1327/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1327/comments
https://api.github.com/repos/huggingface/datasets/issues/1327/events
https://github.com/huggingface/datasets/pull/1327
759,629,321
MDExOlB1bGxSZXF1ZXN0NTM0NjAxNDM3
1,327
Add msr_genomics_kbcomp dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/6687858?v=4", "events_url": "https://api.github.com/users/manandey/events{/privacy}", "followers_url": "https://api.github.com/users/manandey/followers", "following_url": "https://api.github.com/users/manandey/following{/other_user}", "gists_url": "https://api.github.com/users/manandey/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/manandey", "id": 6687858, "login": "manandey", "node_id": "MDQ6VXNlcjY2ODc4NTg=", "organizations_url": "https://api.github.com/users/manandey/orgs", "received_events_url": "https://api.github.com/users/manandey/received_events", "repos_url": "https://api.github.com/users/manandey/repos", "site_admin": false, "starred_url": "https://api.github.com/users/manandey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manandey/subscriptions", "type": "User", "url": "https://api.github.com/users/manandey" }
[]
closed
false
null
[]
null
[]
"2020-12-08T17:18:20"
"2020-12-08T18:18:32"
"2020-12-08T18:18:06"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1327.diff", "html_url": "https://github.com/huggingface/datasets/pull/1327", "merged_at": "2020-12-08T18:18:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/1327.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1327" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1327/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1327/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2574
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2574/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2574/comments
https://api.github.com/repos/huggingface/datasets/issues/2574/events
https://github.com/huggingface/datasets/pull/2574
934,632,378
MDExOlB1bGxSZXF1ZXN0NjgxNjczMzYy
2,574
Add streaming in load a dataset docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2021-07-01T09:32:53"
"2021-07-01T14:12:22"
"2021-07-01T14:12:21"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2574.diff", "html_url": "https://github.com/huggingface/datasets/pull/2574", "merged_at": "2021-07-01T14:12:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/2574.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2574" }
Mention dataset streaming on the "loading a dataset" page of the documentation
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2574/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2574/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4059
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4059/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4059/comments
https://api.github.com/repos/huggingface/datasets/issues/4059/events
https://github.com/huggingface/datasets/pull/4059
1,186,149,949
PR_kwDODunzps41TC-o
4,059
Load GitHub datasets from Hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Currently the github datasets versioning is synced with the `datasets` lib versioning: when you load a github dataset using `datasets==x.y.z`, then the version of the dataset will be the one at the git tag `x.y.z`. This is for reproducibility reasons.\r\n\r\nWe could stop having this behavior and always use the latest version of the dataset, but when we do a breaking change it will break github datasets for previous versions of the library. It could be nice to think about tools that will allow backward compatibility if we ever need to to a breaking change in some datasets. Maybe a way to specify which revision of the dataset to use based on the `datasets` major version.\r\n\r\nIf we keep this behavior, then maybe add a note in setup.py to push to PyPI only after the `Update Hub repositories` CI job is done. It can take a few minutes to add the version tag to all the dataset repositories on the Hub. If we push to PyPI before the tags are pushed, then some users might get some 404 if at the same time they installed `datasets` and run `load_dataset`.", "@lhoestq I was going to increase the `max_retries` as done for metrics:\r\n- #4063 \r\n\r\nBut then I realized that loading from the Hub would work as well. That is why I opened this PR.\r\n\r\nDefinitely, we should decide which behavior we want:\r\n- We have been working in the direction of eliminating the distinctions between canonical/community datasets\r\n- If we continue to go in that direction, then passing (or not passing) `revision` should have the same behavior for canonical/community\r\n- If we want to continue to tight the library version with the canonical datasets version, that is definitely a difference between canonical and community datasets\r\n\r\nNot sure what could be better in the long term...", "> We could stop having this behavior and always use the latest version of the dataset, but when we do a breaking change it will break github datasets for previous versions of the library. \r\n\r\nNot sure of understanding this. Previous versions of the `datasets` library will continue to download GitHub datasets from GitHub, syncing library/dataset versions... Where is the problem?", "Yes you're right, previous versions of `datasets` will still continue to download from github, but not future versions.\r\nIf we release `datasets` 2.1 by removing this behavior and if one day we release `datasets` 3.0 with a breaking change in the dataset scripts, then all version >=2.1 will break.", "Ideally we should drop the differences between github datasets and community datasets, and maybe provide a way to fallback on an older version of a dataset repository if the user's `datasets` version is too old and incompatible with it.", "I just noticed I literally opened the same PR lol\r\n\r\nI'm still convinced that we should do a better version compatibility check but we can see that later IMO", "Normally in open source projects, when there is a duplicate PR, the latter is tagged as \"duplicate\" and closed. 
:stuck_out_tongue_winking_eye: \r\n\r\nLet me make things clear in my mind: so you say that the blocking point that was preventing this PR from merging, now is no longer a blocking point and could be addresses in a subsequent PR?", "Let me close the duplicate one, sorry\r\n\r\n> Let me make things clear my mind: so you say that the blocking point that was preventing this PR from merging now is no longer a blocking point and could be addresses in a subsequent PR?\r\n\r\nYes 🙈", "> Note that after this PR, all the changes made to a dataset will affect all the datasets version from now on\r\n\r\nYes, we have aligned this behavior with Hub datasets, as this is already the case for Hub datasets." ]
"2022-03-30T09:21:56"
"2022-09-16T12:43:26"
"2022-09-16T12:40:43"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4059.diff", "html_url": "https://github.com/huggingface/datasets/pull/4059", "merged_at": "2022-09-16T12:40:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/4059.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4059" }
We have recurrently had connection errors when requesting GitHub because sometimes the site is not available. This PR requests the Hub instead, once all GitHub datasets are mirrored on the Hub. Fix #2048 Related to: - #4051 - #3210 - #2787 - #2075 - #2036
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4059/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4059/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1768
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1768/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1768/comments
https://api.github.com/repos/huggingface/datasets/issues/1768/events
https://github.com/huggingface/datasets/pull/1768
792,150,745
MDExOlB1bGxSZXF1ZXN0NTYwMDgyNzIx
1,768
Mention kwargs in the Dataset Formatting docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
[]
closed
false
null
[]
null
[]
"2021-01-22T16:43:20"
"2021-01-31T12:33:10"
"2021-01-25T09:14:59"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1768.diff", "html_url": "https://github.com/huggingface/datasets/pull/1768", "merged_at": "2021-01-25T09:14:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/1768.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1768" }
Hi, This was discussed in Issue #1762 where the docs didn't mention that keyword arguments to `datasets.Dataset.set_format()` are allowed. To prevent people from having to check the code/method docs, I just added a couple of lines in the docs. Please let me know your thoughts on this. Thanks, Gunjan @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1768/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1768/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3527
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3527/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3527/comments
https://api.github.com/repos/huggingface/datasets/issues/3527/events
https://github.com/huggingface/datasets/pull/3527
1,093,840,707
PR_kwDODunzps4wiN1w
3,527
Update README.md
{ "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/meg-huggingface", "id": 90473723, "login": "meg-huggingface", "node_id": "MDQ6VXNlcjkwNDczNzIz", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "site_admin": false, "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "type": "User", "url": "https://api.github.com/users/meg-huggingface" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/meg-huggingface", "id": 90473723, "login": "meg-huggingface", "node_id": "MDQ6VXNlcjkwNDczNzIz", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "site_admin": false, "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "type": "User", "url": "https://api.github.com/users/meg-huggingface" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/meg-huggingface", "id": 90473723, "login": "meg-huggingface", "node_id": "MDQ6VXNlcjkwNDczNzIz", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "site_admin": false, "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "type": "User", "url": "https://api.github.com/users/meg-huggingface" } ]
null
[]
"2022-01-04T23:39:41"
"2022-01-05T00:23:50"
"2022-01-05T00:23:50"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3527.diff", "html_url": "https://github.com/huggingface/datasets/pull/3527", "merged_at": "2022-01-05T00:23:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/3527.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3527" }
Adding licensing information.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3527/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3527/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2957
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2957/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2957/comments
https://api.github.com/repos/huggingface/datasets/issues/2957/events
https://github.com/huggingface/datasets/issues/2957
1,004,868,337
I_kwDODunzps475RLx
2,957
MultiWOZ Dataset NonMatchingChecksumError
{ "avatar_url": "https://avatars.githubusercontent.com/u/8754873?v=4", "events_url": "https://api.github.com/users/bradyneal/events{/privacy}", "followers_url": "https://api.github.com/users/bradyneal/followers", "following_url": "https://api.github.com/users/bradyneal/following{/other_user}", "gists_url": "https://api.github.com/users/bradyneal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bradyneal", "id": 8754873, "login": "bradyneal", "node_id": "MDQ6VXNlcjg3NTQ4NzM=", "organizations_url": "https://api.github.com/users/bradyneal/orgs", "received_events_url": "https://api.github.com/users/bradyneal/received_events", "repos_url": "https://api.github.com/users/bradyneal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bradyneal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bradyneal/subscriptions", "type": "User", "url": "https://api.github.com/users/bradyneal" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi Brady! I met the similar issue, it stuck in the downloading stage instead of download anything, maybe it is broken. After I change the downloading from URLs to one url of the [Multiwoz project](https://github.com/budzianowski/multiwoz/archive/44f0f8479f11721831c5591b839ad78827da197b.zip) and use dirs to get separate files, the problems gone." ]
"2021-09-22T23:45:00"
"2022-03-15T16:07:02"
"2022-03-15T16:07:02"
NONE
null
null
null
## Describe the bug The checksums for the downloaded MultiWOZ dataset and source MultiWOZ dataset aren't matching. ## Steps to reproduce the bug Both of the below dataset versions yield the checksum error: ```python from datasets import load_dataset dataset = load_dataset('multi_woz_v22', 'v2.2') dataset = load_dataset('multi_woz_v22', 'v2.2_active_only') ``` ## Expected results For the above calls to `load_dataset` to work. ## Actual results NonMatchingChecksumError. Traceback: > Traceback (most recent call last): File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3441, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-15-4e91280e112e>", line 1, in <module> dataset = load_dataset('multi_woz_v22', 'v2.2') File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/datasets/load.py", line 847, in load_dataset builder_instance.download_and_prepare( File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/datasets/builder.py", line 615, in download_and_prepare self._download_and_prepare( File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare verify_checksums( File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_001.json'] ## Environment info - `datasets` version: 1.11.0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.10 - PyArrow version: 5.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2957/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2957/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2765
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2765/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2765/comments
https://api.github.com/repos/huggingface/datasets/issues/2765/events
https://github.com/huggingface/datasets/issues/2765
962,861,395
MDU6SXNzdWU5NjI4NjEzOTU=
2,765
BERTScore Error
{ "avatar_url": "https://avatars.githubusercontent.com/u/49101362?v=4", "events_url": "https://api.github.com/users/gagan3012/events{/privacy}", "followers_url": "https://api.github.com/users/gagan3012/followers", "following_url": "https://api.github.com/users/gagan3012/following{/other_user}", "gists_url": "https://api.github.com/users/gagan3012/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gagan3012", "id": 49101362, "login": "gagan3012", "node_id": "MDQ6VXNlcjQ5MTAxMzYy", "organizations_url": "https://api.github.com/users/gagan3012/orgs", "received_events_url": "https://api.github.com/users/gagan3012/received_events", "repos_url": "https://api.github.com/users/gagan3012/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gagan3012/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gagan3012/subscriptions", "type": "User", "url": "https://api.github.com/users/gagan3012" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi,\r\n\r\nThe `use_fast_tokenizer` argument has been recently added to the bert-score lib. I've opened a PR with the fix. In the meantime, you can try to downgrade the version of bert-score with the following command to make the code work:\r\n```\r\npip uninstall bert-score\r\npip install \"bert-score<0.3.10\"\r\n```" ]
"2021-08-06T15:58:57"
"2021-08-09T11:16:25"
"2021-08-09T11:16:25"
NONE
null
null
null
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python predictions = ["hello there", "general kenobi"] references = ["hello there", "general kenobi"] bert = load_metric('bertscore') bert.compute(predictions=predictions, references=references,lang='en') ``` # Bug `TypeError: get_hash() missing 1 required positional argument: 'use_fast_tokenizer'` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Colab - Python version: - PyArrow version:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2765/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2765/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4243
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4243/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4243/comments
https://api.github.com/repos/huggingface/datasets/issues/4243/events
https://github.com/huggingface/datasets/pull/4243
1,217,689,909
PR_kwDODunzps425Gkn
4,243
WIP: Initial shades loading script and readme
{ "avatar_url": "https://avatars.githubusercontent.com/u/69018523?v=4", "events_url": "https://api.github.com/users/shayne-longpre/events{/privacy}", "followers_url": "https://api.github.com/users/shayne-longpre/followers", "following_url": "https://api.github.com/users/shayne-longpre/following{/other_user}", "gists_url": "https://api.github.com/users/shayne-longpre/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shayne-longpre", "id": 69018523, "login": "shayne-longpre", "node_id": "MDQ6VXNlcjY5MDE4NTIz", "organizations_url": "https://api.github.com/users/shayne-longpre/orgs", "received_events_url": "https://api.github.com/users/shayne-longpre/received_events", "repos_url": "https://api.github.com/users/shayne-longpre/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shayne-longpre/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shayne-longpre/subscriptions", "type": "User", "url": "https://api.github.com/users/shayne-longpre" }
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
[]
null
[ "Thanks for your contribution, @shayne-longpre.\r\n\r\nAre you still interested in adding this dataset? As we are transferring the dataset scripts from this GitHub repo, we would recommend you to add this to the Hugging Face Hub: https://huggingface.co/datasets" ]
"2022-04-27T17:45:43"
"2022-10-03T09:36:35"
"2022-10-03T09:36:35"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4243.diff", "html_url": "https://github.com/huggingface/datasets/pull/4243", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4243.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4243" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4243/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4243/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2225
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2225/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2225/comments
https://api.github.com/repos/huggingface/datasets/issues/2225/events
https://github.com/huggingface/datasets/pull/2225
858,469,561
MDExOlB1bGxSZXF1ZXN0NjE1NzAzMTY4
2,225
fixed one instance of 'train' to 'test'
{ "avatar_url": "https://avatars.githubusercontent.com/u/46733535?v=4", "events_url": "https://api.github.com/users/alexwdong/events{/privacy}", "followers_url": "https://api.github.com/users/alexwdong/followers", "following_url": "https://api.github.com/users/alexwdong/following{/other_user}", "gists_url": "https://api.github.com/users/alexwdong/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alexwdong", "id": 46733535, "login": "alexwdong", "node_id": "MDQ6VXNlcjQ2NzMzNTM1", "organizations_url": "https://api.github.com/users/alexwdong/orgs", "received_events_url": "https://api.github.com/users/alexwdong/received_events", "repos_url": "https://api.github.com/users/alexwdong/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alexwdong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexwdong/subscriptions", "type": "User", "url": "https://api.github.com/users/alexwdong" }
[]
closed
false
null
[]
null
[ "Thanks ! good catch\r\n\r\nCould you also update the metadata of this dataset ?\r\nYou can do so by running\r\n```\r\ndatasets-cli test ./datasets/newsgroup --all_configs --save_infos --ignore_verifications\r\n```\r\nThis should update the dataset_infos.json file that contains the size of all the splits for example.", "Hi,\r\n`dataset_infos.json` should be updated now.\r\n" ]
"2021-04-15T04:26:40"
"2021-04-15T22:09:50"
"2021-04-15T21:19:09"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2225.diff", "html_url": "https://github.com/huggingface/datasets/pull/2225", "merged_at": "2021-04-15T21:19:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/2225.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2225" }
I believe this should be 'test' instead of 'train'
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2225/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2225/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1540
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1540/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1540/comments
https://api.github.com/repos/huggingface/datasets/issues/1540/events
https://github.com/huggingface/datasets/pull/1540
765,357,702
MDExOlB1bGxSZXF1ZXN0NTM4OTQ1NDc2
1,540
added TTC4900: A Benchmark Data for Turkish Text Categorization dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/5150963?v=4", "events_url": "https://api.github.com/users/yavuzKomecoglu/events{/privacy}", "followers_url": "https://api.github.com/users/yavuzKomecoglu/followers", "following_url": "https://api.github.com/users/yavuzKomecoglu/following{/other_user}", "gists_url": "https://api.github.com/users/yavuzKomecoglu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yavuzKomecoglu", "id": 5150963, "login": "yavuzKomecoglu", "node_id": "MDQ6VXNlcjUxNTA5NjM=", "organizations_url": "https://api.github.com/users/yavuzKomecoglu/orgs", "received_events_url": "https://api.github.com/users/yavuzKomecoglu/received_events", "repos_url": "https://api.github.com/users/yavuzKomecoglu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yavuzKomecoglu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yavuzKomecoglu/subscriptions", "type": "User", "url": "https://api.github.com/users/yavuzKomecoglu" }
[]
closed
false
null
[]
null
[ "@lhoestq, can you help with creating dummy_data?\r\n", "Hi @yavuzKomecoglu did you manage to build the dummy data ?", "> Hi @yavuzKomecoglu did you manage to build the dummy data ?\r\n\r\nHi, sorry for the return. I've created dummy_data.zip manually.", "> Nice thank you !\r\n> \r\n> Before we merge can you fill the two sections of the dataset card I suggested ?\r\n> And also remove one remaining print statement\r\n\r\nI updated your suggestions. Thank you very much for your support.", "I think you accidentally pushed the readme of another dataset (name_to_nation).\r\nI removed it so you have to `git pull`\r\n\r\nBecause of that I guess your changes about the ttc4900 was not included.\r\nFeel free to ping me once they're added\r\n\r\n\r\n", "> I think you accidentally pushed the readme of another dataset (name_to_nation).\r\n> I removed it so you have to `git pull`\r\n> \r\n> Because of that I guess your changes about the ttc4900 was not included.\r\n> Feel free to ping me once they're added\r\n\r\nI did `git pull` and updated readme **ttc4900**.", "merging since the Ci is fixed on master" ]
"2020-12-13T12:43:33"
"2020-12-18T10:09:01"
"2020-12-18T10:09:01"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1540.diff", "html_url": "https://github.com/huggingface/datasets/pull/1540", "merged_at": "2020-12-18T10:09:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/1540.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1540" }
This PR adds the TTC4900 dataset which is a Turkish Text Categorization dataset by me and @basakbuluz. Homepage: [https://www.kaggle.com/savasy/ttc4900](https://www.kaggle.com/savasy/ttc4900) Point of Contact: [Savaş Yıldırım](mailto:savasy@gmail.com) / @savasy
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1540/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1540/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1420
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1420/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1420/comments
https://api.github.com/repos/huggingface/datasets/issues/1420/events
https://github.com/huggingface/datasets/pull/1420
760,700,388
MDExOlB1bGxSZXF1ZXN0NTM1NDg4MTM5
1,420
Add dataset yoruba_wordsim353
{ "avatar_url": "https://avatars.githubusercontent.com/u/1858628?v=4", "events_url": "https://api.github.com/users/michael-aloys/events{/privacy}", "followers_url": "https://api.github.com/users/michael-aloys/followers", "following_url": "https://api.github.com/users/michael-aloys/following{/other_user}", "gists_url": "https://api.github.com/users/michael-aloys/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/michael-aloys", "id": 1858628, "login": "michael-aloys", "node_id": "MDQ6VXNlcjE4NTg2Mjg=", "organizations_url": "https://api.github.com/users/michael-aloys/orgs", "received_events_url": "https://api.github.com/users/michael-aloys/received_events", "repos_url": "https://api.github.com/users/michael-aloys/repos", "site_admin": false, "starred_url": "https://api.github.com/users/michael-aloys/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/michael-aloys/subscriptions", "type": "User", "url": "https://api.github.com/users/michael-aloys" }
[]
closed
false
null
[]
null
[ "merging since the CI is fixed on master" ]
"2020-12-09T21:54:29"
"2020-12-11T13:34:04"
"2020-12-11T13:34:04"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1420.diff", "html_url": "https://github.com/huggingface/datasets/pull/1420", "merged_at": "2020-12-11T13:34:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/1420.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1420" }
Contains loading script as well as dataset card including YAML tags.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1420/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1420/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2248
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2248/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2248/comments
https://api.github.com/repos/huggingface/datasets/issues/2248/events
https://github.com/huggingface/datasets/pull/2248
864,853,447
MDExOlB1bGxSZXF1ZXN0NjIxMDEyNzg5
2,248
Implement Dataset to JSON
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
{ "closed_at": "2021-05-31T16:20:53Z", "closed_issues": 3, "created_at": "2021-04-09T13:16:31Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-05-14T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/3", "id": 6644287, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/3/labels", "node_id": "MDk6TWlsZXN0b25lNjY0NDI4Nw==", "number": 3, "open_issues": 0, "state": "closed", "title": "1.7", "updated_at": "2021-05-31T16:20:53Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/3" }
[]
"2021-04-22T11:46:51"
"2021-04-27T15:29:21"
"2021-04-27T15:29:20"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2248.diff", "html_url": "https://github.com/huggingface/datasets/pull/2248", "merged_at": "2021-04-27T15:29:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/2248.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2248" }
Implement `Dataset.to_json`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2248/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2248/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3215
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3215/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3215/comments
https://api.github.com/repos/huggingface/datasets/issues/3215/events
https://github.com/huggingface/datasets/pull/3215
1,045,011,207
PR_kwDODunzps4uGx4o
3,215
Small updates to to_tf_dataset documentation
{ "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Rocketknight1", "id": 12866554, "login": "Rocketknight1", "node_id": "MDQ6VXNlcjEyODY2NTU0", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "type": "User", "url": "https://api.github.com/users/Rocketknight1" }
[]
closed
false
null
[]
null
[ "@stevhliu Accepted both suggestions, thanks for the review!" ]
"2021-11-04T17:22:01"
"2021-11-04T18:55:38"
"2021-11-04T18:55:37"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3215.diff", "html_url": "https://github.com/huggingface/datasets/pull/3215", "merged_at": "2021-11-04T18:55:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/3215.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3215" }
I added a little more description about `to_tf_dataset` compared to just setting the format
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3215/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3215/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1809
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1809/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1809/comments
https://api.github.com/repos/huggingface/datasets/issues/1809/events
https://github.com/huggingface/datasets/pull/1809
799,059,141
MDExOlB1bGxSZXF1ZXN0NTY1NzY4ODQz
1,809
Add FreebaseQA dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
[]
closed
false
null
[]
null
[ "Hi ! It looks like this PR contains changes about other datasets than freebase_qa such as DuoRC.\r\n\r\nCan you remove these changes please ?", "Hi @lhoestq,\r\n\r\nI think this happened because of rebasing. I'm unable to remove the duorc commit from the branch. GEM, Arabic sarcasm datasets are also there. I can't see any merge conflicts, however. Before commiting I always rebase (shouldn't have done that).\r\nCan you explain what is to be done? Should I create a clean PR?", "Hi @gchhablani \r\nI think you can simply create another branch and another PR.\r\n\r\nIf I understand correctly the github diff is messed up because you rebased instead of merge.\r\nRebasing is supposed to be used only before pushing the branch the first time, or github messes up the diff.\r\nIf you want to include changes from master on a branch that is already push you need to use git merge.", "Thanks @lhoestq.\r\n\r\nI understand the issue now. I missed the instructions on the template. Sorry for bothering you unnecessarily, I'm pretty new to contributing on GitHub. I'll make a fresh PR.\r\n", "No problem, I'm not a big fan of this weird behavior tbh.\r\nThanks for making a new PR", "@lhoestq Haha, well, it's not as weird as not reading the [instructions](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#open-a-pull-request-on-the-main-huggingface-repo-and-share-your-work).\r\nAlso, I'm enjoying adding new datasets so it's all cool :)" ]
"2021-02-02T08:35:53"
"2021-02-03T17:15:05"
"2021-02-03T16:43:06"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1809.diff", "html_url": "https://github.com/huggingface/datasets/pull/1809", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1809.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1809" }
Adding FreebaseQA dataset suggested in PR #1435 with minor edits. Also closes that PR. Requesting @lhoestq to review.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1809/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1809/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1758
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1758/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1758/comments
https://api.github.com/repos/huggingface/datasets/issues/1758/events
https://github.com/huggingface/datasets/issues/1758
790,626,116
MDU6SXNzdWU3OTA2MjYxMTY=
1,758
dataset.search() (elastic) cannot reliably retrieve search results
{ "avatar_url": "https://avatars.githubusercontent.com/u/49048309?v=4", "events_url": "https://api.github.com/users/afogarty85/events{/privacy}", "followers_url": "https://api.github.com/users/afogarty85/followers", "following_url": "https://api.github.com/users/afogarty85/following{/other_user}", "gists_url": "https://api.github.com/users/afogarty85/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/afogarty85", "id": 49048309, "login": "afogarty85", "node_id": "MDQ6VXNlcjQ5MDQ4MzA5", "organizations_url": "https://api.github.com/users/afogarty85/orgs", "received_events_url": "https://api.github.com/users/afogarty85/received_events", "repos_url": "https://api.github.com/users/afogarty85/repos", "site_admin": false, "starred_url": "https://api.github.com/users/afogarty85/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/afogarty85/subscriptions", "type": "User", "url": "https://api.github.com/users/afogarty85" }
[]
closed
false
null
[]
null
[ "Hi !\r\nI tried your code on my side and I was able to workaround this issue by waiting a few seconds before querying the index.\r\nMaybe this is because the index is not updated yet on the ElasticSearch side ?", "Thanks for the feedback! I added a 30 second \"sleep\" and that seemed to work well!" ]
"2021-01-21T02:26:37"
"2021-01-22T00:25:50"
"2021-01-22T00:25:50"
NONE
null
null
null
I am trying to use elastic search to retrieve the indices of items in the dataset in their precise order, given shuffled training indices. The problem I have is that I cannot retrieve reliable results with my data on my first search. I have to run the search **twice** to get the right answer. I am indexing data that looks like the following from the HF SQuAD 2.0 data set: ``` ['57318658e6313a140071d02b', '56f7165e3d8e2e1400e3733a', '570e2f6e0b85d914000d7d21', '5727e58aff5b5019007d97d0', '5a3b5a503ff257001ab8441f', '57262fab271a42140099d725'] ``` To reproduce the issue, try: ``` from datasets import load_dataset, load_metric from transformers import BertTokenizerFast, BertForQuestionAnswering from elasticsearch import Elasticsearch import numpy as np import collections from tqdm.auto import tqdm import torch # from https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb#scrollTo=941LPhDWeYv- tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased') max_length = 384 # The maximum length of a feature (question and context) doc_stride = 128 # The authorized overlap between two part of the context when splitting it is needed. pad_on_right = tokenizer.padding_side == "right" squad_v2 = True # from https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb#scrollTo=941LPhDWeYv- def prepare_validation_features(examples): # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results # in one example possible giving several features when a context is long, each of those features having a # context that overlaps a bit the context of the previous feature. tokenized_examples = tokenizer( examples["question" if pad_on_right else "context"], examples["context" if pad_on_right else "question"], truncation="only_second" if pad_on_right else "only_first", max_length=max_length, stride=doc_stride, return_overflowing_tokens=True, return_offsets_mapping=True, padding="max_length", ) # Since one example might give us several features if it has a long context, we need a map from a feature to # its corresponding example. This key gives us just that. sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping") # We keep the example_id that gave us this feature and we will store the offset mappings. tokenized_examples["example_id"] = [] for i in range(len(tokenized_examples["input_ids"])): # Grab the sequence corresponding to that example (to know what is the context and what is the question). sequence_ids = tokenized_examples.sequence_ids(i) context_index = 1 if pad_on_right else 0 # One example can give several spans, this is the index of the example containing this span of text. sample_index = sample_mapping[i] tokenized_examples["example_id"].append(examples["id"][sample_index]) # Set to None the offset_mapping that are not part of the context so it's easy to determine if a token # position is part of the context or not. 
tokenized_examples["offset_mapping"][i] = [ (list(o) if sequence_ids[k] == context_index else None) for k, o in enumerate(tokenized_examples["offset_mapping"][i]) ] return tokenized_examples # build base examples, features set of training data shuffled_idx = pd.read_csv('https://raw.githubusercontent.com/afogarty85/temp/main/idx.csv')['idx'].to_list() examples = load_dataset("squad_v2").shuffle(seed=1)['train'] features = load_dataset("squad_v2").shuffle(seed=1)['train'].map( prepare_validation_features, batched=True, remove_columns=['answers', 'context', 'id', 'question', 'title']) # reorder features by the training process features = features.select(indices=shuffled_idx) # get the example ids to match with the "example" data; get unique entries id_list = list(dict.fromkeys(features['example_id'])) # now search for their index positions in the examples data set; load elastic search es = Elasticsearch([{'host': 'localhost'}]).ping() # add an index to the id column for the examples examples.add_elasticsearch_index(column='id') # retrieve the example index example_idx_k1 = [examples.search(index_name='id', query=i, k=1).indices for i in id_list] example_idx_k1 = [item for sublist in example_idx_k1 for item in sublist] example_idx_k2 = [examples.search(index_name='id', query=i, k=3).indices for i in id_list] example_idx_k2 = [item for sublist in example_idx_k2 for item in sublist] len(example_idx_k1) # should be 130319 len(example_idx_k2) # should be 130319 #trial 1 lengths: # k=1: 130314 # k=3: 130319 # trial 2: # just run k=3 first: 130310 # try k=1 after k=3: 130319 ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1758/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1758/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4752
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4752/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4752/comments
https://api.github.com/repos/huggingface/datasets/issues/4752/events
https://github.com/huggingface/datasets/issues/4752
1,319,464,409
I_kwDODunzps5OpW3Z
4,752
DatasetInfo issue when testing multiple configs: mixed task_templates
{ "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BramVanroy", "id": 2779410, "login": "BramVanroy", "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "repos_url": "https://api.github.com/users/BramVanroy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "type": "User", "url": "https://api.github.com/users/BramVanroy" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "I've narrowed down the issue to the `dataset_module_factory` which already creates a `dataset_infos.json` file down in the `.cache/modules/dataset_modules/..` folder. That JSON file already contains the wrong task_templates for `unfiltered`.", "Ugh. Found the issue: apparently `datasets` was reusing the already existing `dataset_infos.json` that is inside `datasets/datasets/hebban-reviews`! Is this desired behavior?\r\n\r\nPerhaps when `--save_infos` and `--all_configs` are given, an existing `dataset_infos.json` file should first be deleted before continuing with the test? Because that would assume that the user wants to create a new infos file for all configs anyway.", "Hi! I think this is a reasonable solution. Would you be interested in submitting a PR?" ]
"2022-07-27T12:04:54"
"2022-08-08T18:20:50"
null
CONTRIBUTOR
null
null
null
## Describe the bug When running the `datasets-cli test` it would seem that some config properties in a DatasetInfo get mangled, leading to issues, e.g., about the ClassLabel. ## Steps to reproduce the bug In summary, what I want to do is create three configs: - unfiltered: no classlabel, no tasks. Gets data from unfiltered.json.gz (I'd want this without splits, just one chunk of data, but that does not seem possible?) - filtered_sentiment: `review_sentiment` as ClassLabel, TextClassification task with `review_sentiment` as label. Gets train/test split from respective json.gz files - filtered_rating: `review_rating0` as ClassLabel, TextClassification task with `review_rating0` as label. Gets train/test split from respective json.gz files This might be a bit tedious to reproduce, so I am sorry, but these are the steps: - Clone datasets -> `datasets/` and install it - Clone `https://huggingface.co/datasets/BramVanroy/hebban-reviews` into `datasets/datasets` so that you have a new folder `datasets/datasets/hebban-reviews/`. - Replace the HebbanReviews class with this new one: ```python class HebbanReviews(datasets.GeneratorBasedBuilder): """The Hebban book reviews dataset.""" BUILDER_CONFIGS = [ HebbanReviewsConfig( name="unfiltered", description=_HEBBAN_REVIEWS_UNFILTERED_DESCRIPTION, version=datasets.Version(_HEBBAN_VERSION) ), HebbanReviewsConfig( name="filtered_sentiment", description=f"This config has the negative, neutral, and positive sentiment scores as ClassLabel in the 'review_sentiment' column.\n{_HEBBAN_REVIEWS_FILTERED_DESCRIPTION}", version=datasets.Version(_HEBBAN_VERSION) ), HebbanReviewsConfig( name="filtered_rating", description=f"This config has the 5-class ratings as ClassLabel in the 'review_rating0' column (which is a variant of 'review_rating' that starts counting from 0 instead of 1).\n{_HEBBAN_REVIEWS_FILTERED_DESCRIPTION}", version=datasets.Version(_HEBBAN_VERSION) ) ] DEFAULT_CONFIG_NAME = "filtered_sentiment" _URLS = { "train": "train.jsonl.gz", "test": "test.jsonl.gz", "unfiltered": "unfiltered.jsonl.gz", } def _info(self): features = { "review_title": datasets.Value("string"), "review_text": datasets.Value("string"), "review_text_without_quotes": datasets.Value("string"), "review_n_quotes": datasets.Value("int32"), "review_n_tokens": datasets.Value("int32"), "review_rating": datasets.Value("int32"), "review_rating0": datasets.Value("int32"), "review_author_url": datasets.Value("string"), "review_author_type": datasets.Value("string"), "review_n_likes": datasets.Value("int32"), "review_n_comments": datasets.Value("int32"), "review_url": datasets.Value("string"), "review_published_date": datasets.Value("string"), "review_crawl_date": datasets.Value("string"), "lid": datasets.Value("string"), "lid_probability": datasets.Value("float32"), "review_sentiment": datasets.features.ClassLabel(names=["negative", "neutral", "positive"]), "review_sentiment_label": datasets.Value("string"), "book_id": datasets.Value("int32"), } if self.config.name == "filtered_sentiment": task_templates = [datasets.TextClassification(text_column="review_text_without_quotes", label_column="review_sentiment")] elif self.config.name == "filtered_rating": # For CrossEntropy, our classes need to start at index 0 -- not 1 features["review_rating0"] = datasets.features.ClassLabel(names=["1", "2", "3", "4", "5"]) features["review_sentiment"] = datasets.Value("int32") task_templates = [datasets.TextClassification(text_column="review_text_without_quotes", label_column="review_rating0")] elif 
self.config.name == "unfiltered": # no ClassLabels in unfiltered features["review_sentiment"] = datasets.Value("int32") task_templates = None else: raise ValueError(f"Unsupported config {self.config.name}. Expected one of 'filtered_sentiment' (default)," f" 'filtered_rating', or 'unfiltered'") print("AT INFO", self.config.name, task_templates) return datasets.DatasetInfo( description=self.config.description, features=datasets.Features(features), homepage="https://huggingface.co/datasets/BramVanroy/hebban-reviews", citation=_HEBBAN_REVIEWS_CITATION, task_templates=task_templates, license="cc-by-4.0" ) def _split_generators(self, dl_manager): if self.config.name.startswith("filtered"): files = dl_manager.download_and_extract({"train": "train.jsonl.gz", "test": "test.jsonl.gz"}) return [ datasets.SplitGenerator( name=datasets.Split.TRAIN, gen_kwargs={ "data_file": files["train"] }, ), datasets.SplitGenerator( name=datasets.Split.TEST, gen_kwargs={ "data_file": files["test"] }, ), ] elif self.config.name == "unfiltered": files = dl_manager.download_and_extract({"train": "unfiltered.jsonl.gz"}) return [ datasets.SplitGenerator( name=datasets.Split.TRAIN, gen_kwargs={ "data_file": files["train"] }, ), ] else: raise ValueError(f"Unsupported config {self.config.name}. Expected one of 'filtered_sentiment' (default)," f" 'filtered_rating', or 'unfiltered'") def _generate_examples(self, data_file): lines = Path(data_file).open(encoding="utf-8").readlines() for line_idx, line in enumerate(lines): row = json.loads(line) yield line_idx, row ``` - finally, run `datasets-cli test ./datasets/hebban-reviews/ --save_infos --all_configs` from within the topmost `datasets` directory ## Expected results Succeeding tests for three different configs. ## Actual results I printed out the values that are given to `DatasetInfo` for config name and task_templates, as you can see. There, as expected, I get `unfiltered None`. I also modified datasets/info.py and added this line [at L.170](https://github.com/huggingface/datasets/blob/f5847a304aa1b38b3a3c54a8318b4df60f1299bc/src/datasets/info.py#L170): ```python print("INTERNALLY AT INFO.PY", self.config_name, self.task_templates) ``` to my surprise, here I get `unfiltered [TextClassification(task='text-classification', text_column='review_text_without_quotes', label_column='review_sentiment')]`. So one way or another, here I suddenly see that `unfiltered` now does have a task_template -- even though that is not what is written in the data loading script, as the first print statement correctly shows. I do not quite understand how, but it seems that the config name and task_templates get mixed. 
This ultimately leads to the following error, but this trace may not be very useful in itself: ``` Traceback (most recent call last): File "C:\Users\bramv\.virtualenvs\hebban-U6poXNQd\Scripts\datasets-cli-script.py", line 33, in <module> sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')()) File "c:\dev\python\hebban\datasets\src\datasets\commands\datasets_cli.py", line 39, in main service.run() File "c:\dev\python\hebban\datasets\src\datasets\commands\test.py", line 144, in run builder.as_dataset() File "c:\dev\python\hebban\datasets\src\datasets\builder.py", line 899, in as_dataset datasets = map_nested( File "c:\dev\python\hebban\datasets\src\datasets\utils\py_utils.py", line 393, in map_nested mapped = [ File "c:\dev\python\hebban\datasets\src\datasets\utils\py_utils.py", line 394, in <listcomp> _single_map_nested((function, obj, types, None, True, None)) File "c:\dev\python\hebban\datasets\src\datasets\utils\py_utils.py", line 330, in _single_map_nested return function(data_struct) File "c:\dev\python\hebban\datasets\src\datasets\builder.py", line 930, in _build_single_dataset ds = self._as_dataset( File "c:\dev\python\hebban\datasets\src\datasets\builder.py", line 1006, in _as_dataset return Dataset(fingerprint=fingerprint, **dataset_kwargs) File "c:\dev\python\hebban\datasets\src\datasets\arrow_dataset.py", line 661, in __init__ info = info.copy() if info is not None else DatasetInfo() File "c:\dev\python\hebban\datasets\src\datasets\info.py", line 286, in copy return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()}) File "<string>", line 20, in __init__ File "c:\dev\python\hebban\datasets\src\datasets\info.py", line 176, in __post_init__ self.task_templates = [ File "c:\dev\python\hebban\datasets\src\datasets\info.py", line 177, in <listcomp> template.align_with_features(self.features) for template in (self.task_templates) File "c:\dev\python\hebban\datasets\src\datasets\tasks\text_classification.py", line 22, in align_with_features raise ValueError(f"Column {self.label_column} is not a ClassLabel.") ValueError: Column review_sentiment is not a ClassLabel. ``` ## Environment info - `datasets` version: 2.4.1.dev0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.8.8 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
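Based on the workaround found in the comment thread (the CLI was silently reusing a stale `dataset_infos.json` shipped next to the script), a minimal sketch of regenerating the infos from scratch; the local path matches the reproduction steps above and the `subprocess` call simply mirrors the CLI invocation.

```python
import subprocess
from pathlib import Path

# Delete the pre-existing infos file so --save_infos regenerates it for all configs
# instead of reusing the stale one (the behaviour questioned in the comments above).
stale_infos = Path("datasets/hebban-reviews/dataset_infos.json")
if stale_infos.exists():
    stale_infos.unlink()

subprocess.run(
    ["datasets-cli", "test", "./datasets/hebban-reviews/", "--save_infos", "--all_configs"],
    check=True,
)
```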
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4752/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4752/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4667
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4667/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4667/comments
https://api.github.com/repos/huggingface/datasets/issues/4667/events
https://github.com/huggingface/datasets/issues/4667
1,299,735,703
I_kwDODunzps5NeGSX
4,667
Dataset Viewer issue for hungnm/multilingual-amazon-review-sentiment-processed
{ "avatar_url": "https://avatars.githubusercontent.com/u/21364546?v=4", "events_url": "https://api.github.com/users/hungnmai/events{/privacy}", "followers_url": "https://api.github.com/users/hungnmai/followers", "following_url": "https://api.github.com/users/hungnmai/following{/other_user}", "gists_url": "https://api.github.com/users/hungnmai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hungnmai", "id": 21364546, "login": "hungnmai", "node_id": "MDQ6VXNlcjIxMzY0NTQ2", "organizations_url": "https://api.github.com/users/hungnmai/orgs", "received_events_url": "https://api.github.com/users/hungnmai/received_events", "repos_url": "https://api.github.com/users/hungnmai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hungnmai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hungnmai/subscriptions", "type": "User", "url": "https://api.github.com/users/hungnmai" }
[ { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" } ]
null
[]
"2022-07-09T18:03:15"
"2022-07-11T07:47:15"
"2022-07-11T07:47:15"
NONE
null
null
null
### Link _No response_ ### Description _No response_ ### Owner _No response_
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4667/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4667/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4780
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4780/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4780/comments
https://api.github.com/repos/huggingface/datasets/issues/4780/events
https://github.com/huggingface/datasets/pull/4780
1,326,034,767
PR_kwDODunzps48g9oA
4,780
Remove apache_beam import from module level in natural_questions dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-08-02T15:34:54"
"2022-08-02T16:16:33"
"2022-08-02T16:03:17"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4780.diff", "html_url": "https://github.com/huggingface/datasets/pull/4780", "merged_at": "2022-08-02T16:03:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/4780.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4780" }
Instead of importing `apache_beam` at the module level, import it in the method `_build_pcollection`. Fix #4779.
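A generic sketch of the pattern this PR applies, not the actual `natural_questions` script; the class body and the `_parse_examples` helper are illustrative only.

```python
import datasets


class NaturalQuestions(datasets.BeamBasedBuilder):
    """Sketch: keep apache_beam out of module scope."""

    def _build_pcollection(self, pipeline, file_paths):
        # Import apache_beam only when the Beam pipeline is actually built, so that
        # merely importing the dataset script does not require the package.
        import apache_beam as beam

        # _parse_examples is a hypothetical helper used for this sketch.
        return pipeline | beam.Create(file_paths) | beam.FlatMap(self._parse_examples)
```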
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4780/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4780/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2486
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2486/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2486/comments
https://api.github.com/repos/huggingface/datasets/issues/2486/events
https://github.com/huggingface/datasets/pull/2486
919,174,898
MDExOlB1bGxSZXF1ZXN0NjY4NTI2Njg3
2,486
Add Rico Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/7613470?v=4", "events_url": "https://api.github.com/users/ncoop57/events{/privacy}", "followers_url": "https://api.github.com/users/ncoop57/followers", "following_url": "https://api.github.com/users/ncoop57/following{/other_user}", "gists_url": "https://api.github.com/users/ncoop57/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ncoop57", "id": 7613470, "login": "ncoop57", "node_id": "MDQ6VXNlcjc2MTM0NzA=", "organizations_url": "https://api.github.com/users/ncoop57/orgs", "received_events_url": "https://api.github.com/users/ncoop57/received_events", "repos_url": "https://api.github.com/users/ncoop57/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ncoop57/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ncoop57/subscriptions", "type": "User", "url": "https://api.github.com/users/ncoop57" }
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
[]
null
[ "Hi ! Thanks for adding this dataset :)\r\n\r\nRegarding your questions:\r\n1. We can have them as different configuration of the `rico` dataset\r\n2. Yes please use the path to the image and not open the image directly, so that we can let users open the image one at at time during training if they want to for example. In the future we'll have an Image feature type that will decode the encoded image data on the fly when accessing the examples.\r\n3. Feel free to keep the hierarchies as strings if they don't follow a fixed format\r\n4. You can just return the path\r\n\r\n", "Thanks for your contribution, @ncoop57. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help." ]
"2021-06-11T20:17:41"
"2022-10-03T09:38:18"
"2022-10-03T09:38:18"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2486.diff", "html_url": "https://github.com/huggingface/datasets/pull/2486", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2486.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2486" }
Hi there! I'm wanting to add the Rico datasets for software engineering type data to y'alls awesome library. However, as I have started coding, I've ran into a few hiccups so I thought it best to open the PR early to get a bit of discussion on how the Rico datasets should be added to the `datasets` lib. 1) There are 7 different datasets under Rico and so I was wondering, should I make a folder for each or should I put them as different configurations of a single dataset? You can see the datasets available for Rico here: http://interactionmining.org/rico 2) As of right now, I have a semi working version of the first dataset which has pairs of screenshots and hierarchies from android applications. However, these screenshots are very large (1440, 2560, 3) and there are 66,000 of them so I am not able to perform the processing that the `datasets` lib does after downloading and extracting the dataset since I run out of memory very fast. Is there a way to have `datasets` lib not put everything into memory while it is processing the dataset? 2.1) If there is not a way, would it be better to just return the path to the screenshots instead of the actual image? 3) The hierarchies are JSON objects and looking through the documentation of `datasets`, I didn't see any feature that I could use for this type of data. So, for now I just have it being read in as a string, is this okay or should I be doing it differently? 4) One of the Rico datasets is a bunch of animations (GIFs), is there a `datasets` feature that I can put this type of data into or should I just return the path as a string? I appreciate any and all help I can get for this PR, I think the Rico datasets will be an awesome addition to the library :nerd_face: !
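Following the reviewer's answers in the comments below (one configuration per Rico release, return file paths rather than decoded images, keep hierarchies as raw JSON strings), a hypothetical sketch of what the builder could look like; all config and column names here are placeholders, not the actual script.

```python
from pathlib import Path

import datasets


class Rico(datasets.GeneratorBasedBuilder):
    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name="screenshots_hierarchies", version=datasets.Version("1.0.0")),
        datasets.BuilderConfig(name="animations", version=datasets.Version("1.0.0")),
        # ... one config per Rico release
    ]

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {
                    "screenshot_path": datasets.Value("string"),  # path, not the decoded image
                    "hierarchy": datasets.Value("string"),        # raw JSON kept as a string
                }
            )
        )

    def _generate_examples(self, data_dir):
        for idx, screenshot in enumerate(sorted(Path(data_dir).glob("*.jpg"))):
            hierarchy = screenshot.with_suffix(".json").read_text(encoding="utf-8")
            yield idx, {"screenshot_path": str(screenshot), "hierarchy": hierarchy}
```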
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2486/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2486/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5120
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5120/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5120/comments
https://api.github.com/repos/huggingface/datasets/issues/5120/events
https://github.com/huggingface/datasets/pull/5120
1,410,641,221
PR_kwDODunzps5A4X10
5,120
Fix `tqdm` zip bug
{ "avatar_url": "https://avatars.githubusercontent.com/u/9879252?v=4", "events_url": "https://api.github.com/users/david1542/events{/privacy}", "followers_url": "https://api.github.com/users/david1542/followers", "following_url": "https://api.github.com/users/david1542/following{/other_user}", "gists_url": "https://api.github.com/users/david1542/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/david1542", "id": 9879252, "login": "david1542", "node_id": "MDQ6VXNlcjk4NzkyNTI=", "organizations_url": "https://api.github.com/users/david1542/orgs", "received_events_url": "https://api.github.com/users/david1542/received_events", "repos_url": "https://api.github.com/users/david1542/repos", "site_admin": false, "starred_url": "https://api.github.com/users/david1542/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/david1542/subscriptions", "type": "User", "url": "https://api.github.com/users/david1542" }
[]
closed
false
null
[]
null
[ "@albertvillanova Thanks for your comment. What do you think about creating 2 `pbar` for each case? I see the `pbar_iterable` is initialized differently. Maybe `pbar` can also be initialized like that.", "@albertvillanova Another solution I implemented is to change `pbar_iterable` and add the `zip` to it. I updated the PR with this solution. Let me know what you think.", "_The documentation is not available anymore as the PR was closed or merged._", "@albertvillanova Done :) Let me know what you think.", "@albertvillanova Thanks :) I also don't see an easy way to test this. This was just a problem in the way `tqdm` was used. I'm not sure we should cover it in tests.", "Hi, \r\n\r\nFirst of all, thanks for this PR. \r\nIt's the first time I join a discussion on GitHUB on problem resolution in libraries such as transformers, so I hope I comply to the best practices for an efficient communication...\r\n\r\nI am running `AutoTokenizer.from_pretrained` in a Google Colab notebook for using with BERT base. \r\nI am experiencing issue [5117](https://github.com/huggingface/datasets/issues/5117).\r\n\r\nEach time I run my notebook, I do:\r\n\r\n`! pip install transformers \r\n! pip install datasets \r\n! pip install huggingface_hub`\r\n\r\nAs I understand, the issue has been resolved and the solution merged to the released version of the code?\r\nSo I expect that the bug is resolved in my notebook, however this is not the case.\r\n\r\nDo I get something wrong? \r\nDo I have to implement some change in the source code myself?\r\n\r\nThanks in advance for your help!", "@Cochonaki Hi :) The problem was fixed but there wasn't a release since then. I believe a new release should come out in the upcoming weeks. Maybe someone from the core maintainers can answer that :)\r\n\r\ncc: @albertvillanova ", "Baby Haiti Coffee SE is born\n\nNH watch\n\nOn Sun, Oct 23, 2022 at 02:39 Dudu Lasry ***@***.***> wrote:\n\n> @Cochonaki <https://github.com/Cochonaki> Hi :) The problem was fixed but\n> there wasn't a release since then. I believe a new release should come out\n> in the upcoming weeks. Maybe someone from the core maintainers can answer\n> that :)\n>\n> cc: @albertvillanova <https://github.com/albertvillanova>\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/5120#issuecomment-1288024546>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AAB4E2NCT7QO7W3PTQGDIKDWETMQ7ANCNFSM6AAAAAARGRBY2M>\n> .\n> You are receiving this because you are subscribed to this thread.Message\n> ID: ***@***.***>\n>\n", "Hi, @Cochonaki.\r\n\r\nAs @david1542 pointed out, we have not made a release since this bug was fixed. We will make one in the following weeks.\r\n\r\nIn the meantime, if you would like to incorporate the bug fix, you can install `datasets` from this repo main branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```", "Thanks a lot @albertvillanova and @david1542, it works now!\r\nI am really thankful for your help, that encourages me to participate more in this community.\r\nSee you around!", "Welcome!!! 🤗" ]
"2022-10-16T22:19:18"
"2022-10-23T10:27:53"
"2022-10-19T08:53:17"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5120.diff", "html_url": "https://github.com/huggingface/datasets/pull/5120", "merged_at": "2022-10-19T08:53:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/5120.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5120" }
This PR solves #5117, by wrapping the entire `zip` clause in tqdm. For more information, please checkout this Stack Overflow thread: https://stackoverflow.com/questions/41171191/tqdm-progressbar-and-zip-built-in-do-not-work-together
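The generic shape of the fix, for context; the actual call site is inside `datasets`' internals and the lists below are placeholders.

```python
from tqdm import tqdm

keys = ["a", "b", "c"]
values = [1, 2, 3]

# Before: wrapping only one of the iterables can leave the progress bar in an
# inconsistent state once zip stops consuming it.
for key, value in zip(tqdm(keys), values):
    pass

# After (the pattern this PR applies): wrap the whole zip in tqdm and pass an
# explicit total, since zip objects have no __len__.
for key, value in tqdm(zip(keys, values), total=len(keys)):
    pass
```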
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5120/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5120/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5730
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5730/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5730/comments
https://api.github.com/repos/huggingface/datasets/issues/5730/events
https://github.com/huggingface/datasets/issues/5730
1,662,007,926
I_kwDODunzps5jEDp2
5,730
CI is broken: ValueError: Name (mock) already in the registry and clobber is False
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
"2023-04-11T08:29:46"
"2023-04-11T08:47:56"
"2023-04-11T08:47:56"
MEMBER
null
null
null
CI is broken for `test_py310`. See: https://github.com/huggingface/datasets/actions/runs/4665326892/jobs/8258580948 ``` =========================== short test summary info ============================ ERROR tests/test_builder.py::test_builder_with_filesystem_download_and_prepare - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_builder.py::test_builder_with_filesystem_download_and_prepare_reload - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_dataset_dict.py::test_dummy_datasetdict_serialize_fs - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_file_utils.py::test_get_from_cache_fsspec - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_filesystem.py::test_is_remote_filesystem - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xexists[tmp_path/file.txt-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xexists[tmp_path/file_that_doesnt_exist.txt-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xexists[mock://top_level/second_level/date=2019-10-01/a.parquet-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xexists[mock://top_level/second_level/date=2019-10-01/file_that_doesnt_exist.parquet-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xlistdir[tmp_path-expected_paths0] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xlistdir[mock://-expected_paths1] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xlistdir[mock://top_level-expected_paths2] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xlistdir[mock://top_level/second_level/date=2019-10-01-expected_paths3] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisdir[tmp_path-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisdir[tmp_path/file.txt-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisdir[mock://-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisdir[mock://top_level-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisdir[mock://dir_that_doesnt_exist-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisfile[tmp_path/file.txt-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisfile[tmp_path/file_that_doesnt_exist.txt-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisfile[mock://-False] - ValueError: Name (mock) already in the registry 
and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisfile[mock://top_level/second_level/date=2019-10-01/a.parquet-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xgetsize[tmp_path/file.txt-100] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xgetsize[mock://-0] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xgetsize[mock://top_level/second_level/date=2019-10-01/a.parquet-100] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xglob[tmp_path/*.txt-expected_paths0] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xglob[mock://*-expected_paths1] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xglob[mock://top_*-expected_paths2] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xglob[mock://top_level/second_level/date=2019-10-0[1-4]-expected_paths3] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xglob[mock://top_level/second_level/date=2019-10-0[1-4]/*-expected_paths4] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xwalk[tmp_path-expected_outputs0] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xwalk[mock://top_level/second_level-expected_outputs1] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_exists[tmp_path/file.txt-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_exists[tmp_path/file_that_doesnt_exist.txt-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_exists[mock://top_level/second_level/date=2019-10-01/a.parquet-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_exists[mock://top_level/second_level/date=2019-10-01/file_that_doesnt_exist.parquet-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[tmp_path-*.txt-expected_paths0] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[mock://-*-expected_paths1] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[mock://-top_*-expected_paths2] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[mock://top_level/second_level-date=2019-10-0[1-4]-expected_paths3] - ValueError: Name (mock) already in the registry and clobber is False ERROR 
tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[mock://top_level/second_level-date=2019-10-0[1-4]/*-expected_paths4] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[tmp_path-*.txt-expected_paths0] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://-date=2019-10-0[1-4]-expected_paths1] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://top_level-date=2019-10-0[1-4]-expected_paths2] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://-date=2019-10-0[1-4]/*-expected_paths3] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://top_level-date=2019-10-0[1-4]/*-expected_paths4] - ValueError: Name (mock) already in the registry and clobber is False ===== 2105 passed, 18 skipped, 38 warnings, 46 errors in 236.22s (0:03:56) ===== ```
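For reference, the error comes from `fsspec`'s protocol registry refusing to re-register the test `mock` filesystem. One way to make such a test fixture idempotent, assuming the mock filesystem is a simple in-memory subclass (the real fixture in the repo may differ), is to pass `clobber=True`:

```python
import fsspec
from fsspec.implementations.memory import MemoryFileSystem


class MockFileSystem(MemoryFileSystem):
    protocol = "mock"


# clobber=True lets the fixture register the protocol again instead of raising
# "Name (mock) already in the registry and clobber is False".
fsspec.register_implementation("mock", MockFileSystem, clobber=True)
```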
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5730/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5730/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6054
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6054/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6054/comments
https://api.github.com/repos/huggingface/datasets/issues/6054/events
https://github.com/huggingface/datasets/issues/6054
1,813,271,304
I_kwDODunzps5sFFMI
6,054
Multi-processed `Dataset.map` slows down a lot when `import torch`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47121592?v=4", "events_url": "https://api.github.com/users/ShinoharaHare/events{/privacy}", "followers_url": "https://api.github.com/users/ShinoharaHare/followers", "following_url": "https://api.github.com/users/ShinoharaHare/following{/other_user}", "gists_url": "https://api.github.com/users/ShinoharaHare/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ShinoharaHare", "id": 47121592, "login": "ShinoharaHare", "node_id": "MDQ6VXNlcjQ3MTIxNTky", "organizations_url": "https://api.github.com/users/ShinoharaHare/orgs", "received_events_url": "https://api.github.com/users/ShinoharaHare/received_events", "repos_url": "https://api.github.com/users/ShinoharaHare/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ShinoharaHare/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ShinoharaHare/subscriptions", "type": "User", "url": "https://api.github.com/users/ShinoharaHare" }
[ { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" } ]
closed
false
null
[]
null
[ "A duplicate of https://github.com/huggingface/datasets/issues/5929" ]
"2023-07-20T06:36:14"
"2023-07-21T15:19:37"
"2023-07-21T15:19:37"
NONE
null
null
null
### Describe the bug When using `Dataset.map` with `num_proc > 1`, the speed slows down much if I add `import torch` to the start of the script even though I don't use it. I'm not sure if it's `torch` only or if any other package that is "large" will also cause the same result. BTW, `import lightning` also slows it down. Below are the progress bars of `Dataset.map`, the only difference between them is with or without `import torch`, but the speed varies by 6-7 times. - without `import torch` ![image](https://github.com/huggingface/datasets/assets/47121592/0233055a-ced4-424a-9f0f-32a2afd802c2) - with `import torch` ![image](https://github.com/huggingface/datasets/assets/47121592/463eafb7-b81e-4eb9-91ca-fd7fe20f3d59) ### Steps to reproduce the bug Below is the code I used, but I don't think the dataset and the mapping function have much to do with the phenomenon. ```python3 from datasets import load_from_disk, disable_caching from transformers import AutoTokenizer # import torch # import lightning def rearrange_datapoints( batch, tokenizer, sequence_length, ): datapoints = [] input_ids = [] for x in batch['input_ids']: input_ids += x while len(input_ids) >= sequence_length: datapoint = input_ids[:sequence_length] datapoints.append(datapoint) input_ids[:sequence_length] = [] if input_ids: paddings = [-1] * (sequence_length - len(input_ids)) datapoint = paddings + input_ids if tokenizer.padding_side == 'left' else input_ids + paddings datapoints.append(datapoint) batch['input_ids'] = datapoints return batch if __name__ == '__main__': disable_caching() tokenizer = AutoTokenizer.from_pretrained('...', use_fast=False) dataset = load_from_disk('...') dataset = dataset.map( rearrange_datapoints, fn_kwargs=dict( tokenizer=tokenizer, sequence_length=2048, ), batched=True, num_proc=8, ) ``` ### Expected behavior The multi-processed `Dataset.map` function speed between with and without `import torch` should be the same. ### Environment info - `datasets` version: 2.13.1 - Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - PyArrow version: 12.0.0 - Pandas version: 2.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6054/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6054/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2779
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2779/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2779/comments
https://api.github.com/repos/huggingface/datasets/issues/2779/events
https://github.com/huggingface/datasets/pull/2779
964,775,085
MDExOlB1bGxSZXF1ZXN0NzA3MTgwNTgw
2,779
Fix sacrebleu tokenizers
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2021-08-10T09:24:27"
"2021-08-10T11:03:08"
"2021-08-10T10:57:54"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2779.diff", "html_url": "https://github.com/huggingface/datasets/pull/2779", "merged_at": "2021-08-10T10:57:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/2779.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2779" }
Last `sacrebleu` release (v2.0.0) has removed `sacrebleu.TOKENIZERS`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15 This PR makes a hot fix of the bug by using a private function in `sacrebleu`: `sacrebleu.metrics.bleu._get_tokenizer()`. Eventually, this should be further fixed in order to use only public functions. This is a partial hotfix of #2781.
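A hedged sketch of resolving a tokenizer by name through the private helper the PR mentions; whether the returned object is a class or an instance may differ across `sacrebleu` versions, so treat this purely as an illustration of the fallback, not as the exact metric code.

```python
import sacrebleu
from packaging import version

tokenize = "13a"  # example tokenizer name

if version.parse(sacrebleu.__version__) >= version.parse("2.0.0"):
    # sacrebleu 2.0.0 removed the public TOKENIZERS mapping, so the hot fix falls
    # back to the private helper to resolve a tokenizer by name.
    tokenizer = sacrebleu.metrics.bleu._get_tokenizer(tokenize)
else:
    tokenizer = sacrebleu.TOKENIZERS[tokenize]
```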
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2779/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2779/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4849
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4849/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4849/comments
https://api.github.com/repos/huggingface/datasets/issues/4849/events
https://github.com/huggingface/datasets/pull/4849
1,338,273,900
PR_kwDODunzps49JN8d
4,849
1.18.x
{ "avatar_url": "https://avatars.githubusercontent.com/u/49282718?v=4", "events_url": "https://api.github.com/users/Mr-Robot-001/events{/privacy}", "followers_url": "https://api.github.com/users/Mr-Robot-001/followers", "following_url": "https://api.github.com/users/Mr-Robot-001/following{/other_user}", "gists_url": "https://api.github.com/users/Mr-Robot-001/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Mr-Robot-001", "id": 49282718, "login": "Mr-Robot-001", "node_id": "MDQ6VXNlcjQ5MjgyNzE4", "organizations_url": "https://api.github.com/users/Mr-Robot-001/orgs", "received_events_url": "https://api.github.com/users/Mr-Robot-001/received_events", "repos_url": "https://api.github.com/users/Mr-Robot-001/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Mr-Robot-001/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mr-Robot-001/subscriptions", "type": "User", "url": "https://api.github.com/users/Mr-Robot-001" }
[]
closed
false
null
[]
null
[]
"2022-08-14T15:09:19"
"2022-08-14T15:10:02"
"2022-08-14T15:10:02"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4849.diff", "html_url": "https://github.com/huggingface/datasets/pull/4849", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4849.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4849" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4849/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4849/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6074
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6074/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6074/comments
https://api.github.com/repos/huggingface/datasets/issues/6074/events
https://github.com/huggingface/datasets/pull/6074
1,822,299,128
PR_kwDODunzps5Wb8O_
6,074
Misc doc improvements
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006616 / 0.011353 (-0.004737) | 0.003915 / 0.011008 (-0.007093) | 0.083271 / 0.038508 (0.044763) | 0.072595 / 0.023109 (0.049485) | 0.307224 / 0.275898 (0.031326) | 0.337244 / 0.323480 (0.013764) | 0.005296 / 0.007986 (-0.002690) | 0.003325 / 0.004328 (-0.001003) | 0.064589 / 0.004250 (0.060339) | 0.056369 / 0.037052 (0.019316) | 0.310829 / 0.258489 (0.052340) | 0.345563 / 0.293841 (0.051722) | 0.030551 / 0.128546 (-0.097995) | 0.008519 / 0.075646 (-0.067127) | 0.286368 / 0.419271 (-0.132903) | 0.052498 / 0.043533 (0.008966) | 0.308735 / 0.255139 (0.053596) | 0.329234 / 0.283200 (0.046034) | 0.022588 / 0.141683 (-0.119095) | 1.453135 / 1.452155 (0.000981) | 1.525956 / 1.492716 (0.033239) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199417 / 0.018006 (0.181410) | 0.454621 / 0.000490 (0.454131) | 0.004928 / 0.000200 (0.004728) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028436 / 0.037411 (-0.008975) | 0.083722 / 0.014526 (0.069196) | 0.095162 / 0.176557 (-0.081395) | 0.153434 / 0.737135 (-0.583702) | 0.099480 / 0.296338 (-0.196859) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384647 / 0.215209 (0.169438) | 3.838406 / 2.077655 (1.760751) | 1.891267 / 1.504120 (0.387148) | 1.751432 / 1.541195 (0.210238) | 1.737443 / 1.468490 
(0.268953) | 0.487758 / 4.584777 (-4.097019) | 3.635925 / 3.745712 (-0.109787) | 5.208718 / 5.269862 (-0.061144) | 3.029374 / 4.565676 (-1.536302) | 0.057613 / 0.424275 (-0.366662) | 0.007177 / 0.007607 (-0.000430) | 0.455596 / 0.226044 (0.229552) | 4.559969 / 2.268929 (2.291040) | 2.325321 / 55.444624 (-53.119303) | 2.034924 / 6.876477 (-4.841552) | 2.163869 / 2.142072 (0.021796) | 0.583477 / 4.805227 (-4.221750) | 0.132870 / 6.500664 (-6.367795) | 0.059618 / 0.075469 (-0.015851) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263751 / 1.841788 (-0.578037) | 19.740004 / 8.074308 (11.665696) | 14.410980 / 10.191392 (4.219588) | 0.170367 / 0.680424 (-0.510057) | 0.018225 / 0.534201 (-0.515976) | 0.390101 / 0.579283 (-0.189182) | 0.404298 / 0.434364 (-0.030066) | 0.455295 / 0.540337 (-0.085043) | 0.621179 / 1.386936 (-0.765757) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006580 / 0.011353 (-0.004773) | 0.004078 / 0.011008 (-0.006930) | 0.065842 / 0.038508 (0.027334) | 0.074494 / 0.023109 (0.051385) | 0.403644 / 0.275898 (0.127746) | 0.430204 / 0.323480 (0.106724) | 0.005343 / 0.007986 (-0.002643) | 0.003366 / 0.004328 (-0.000963) | 0.064858 / 0.004250 (0.060607) | 0.056252 / 0.037052 (0.019200) | 0.412556 / 0.258489 (0.154067) | 0.434099 / 0.293841 (0.140258) | 0.031518 / 0.128546 (-0.097028) | 0.008543 / 0.075646 (-0.067104) | 0.071658 / 0.419271 (-0.347613) | 0.049962 / 0.043533 (0.006430) | 0.398511 / 0.255139 (0.143372) | 0.415908 / 0.283200 (0.132708) | 0.025011 / 0.141683 (-0.116672) | 1.492350 / 1.452155 (0.040195) | 1.552996 / 1.492716 (0.060280) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204971 / 0.018006 (0.186964) | 0.439965 / 0.000490 (0.439475) | 0.002071 / 0.000200 (0.001872) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031673 / 0.037411 (-0.005738) | 0.087529 / 0.014526 (0.073004) | 0.099882 / 0.176557 (-0.076675) | 0.156994 / 0.737135 (-0.580141) | 0.101421 / 0.296338 (-0.194918) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407480 / 0.215209 (0.192271) | 4.069123 / 2.077655 (1.991468) | 2.081288 / 1.504120 (0.577169) | 1.920367 / 1.541195 (0.379172) | 1.981053 / 1.468490 (0.512563) | 0.481995 / 4.584777 (-4.102782) | 3.546486 / 3.745712 (-0.199226) | 5.133150 / 5.269862 (-0.136712) | 3.056444 / 4.565676 (-1.509232) | 0.056650 / 0.424275 (-0.367625) | 0.007746 / 0.007607 (0.000139) | 0.490891 / 0.226044 (0.264847) | 4.902160 / 2.268929 (2.633232) | 2.564726 / 55.444624 (-52.879899) | 2.234988 / 6.876477 (-4.641489) | 2.387656 / 2.142072 (0.245583) | 0.576315 / 4.805227 (-4.228912) | 0.132065 / 6.500664 (-6.368599) | 0.060728 / 0.075469 (-0.014741) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.370568 / 1.841788 (-0.471220) | 19.883159 / 8.074308 (11.808851) | 14.442066 / 10.191392 (4.250674) | 0.150119 / 0.680424 (-0.530305) | 0.018359 / 0.534201 (-0.515842) | 0.394128 / 0.579283 (-0.185155) | 0.411697 / 0.434364 (-0.022667) | 0.460580 / 0.540337 (-0.079757) | 0.608490 / 1.386936 (-0.778446) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#035d0cf842b82b14059999baa78e8d158dfbed12 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "merging now if you don't mind - this way I can make a patch release" ]
"2023-07-26T12:20:54"
"2023-07-27T16:16:28"
"2023-07-27T16:16:02"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6074.diff", "html_url": "https://github.com/huggingface/datasets/pull/6074", "merged_at": "2023-07-27T16:16:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/6074.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6074" }
Removes the warning about requiring to write a dataset loading script to define multiple configurations, as the README YAML can be used instead (for simple cases). Also, deletes the section about using the `BatchSampler` in `torch<=1.12.1` to speed up loading, as `torch 1.12.1` is over a year old (and `torch 2.0` has been out for a while).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6074/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6074/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3274
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3274/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3274/comments
https://api.github.com/repos/huggingface/datasets/issues/3274/events
https://github.com/huggingface/datasets/pull/3274
1,053,689,140
PR_kwDODunzps4uiL8-
3,274
Fix some contact information formats
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "The CI fail are caused by some missing sections or tags, which is unrelated to this PR. Merging !" ]
"2021-11-15T13:50:34"
"2021-11-15T14:43:55"
"2021-11-15T14:43:54"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3274.diff", "html_url": "https://github.com/huggingface/datasets/pull/3274", "merged_at": "2021-11-15T14:43:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/3274.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3274" }
As reported in https://github.com/huggingface/datasets/issues/3188 some contact information are not displayed correctly. This PR fixes this for CoNLL-2002 and some other datasets with the same issue
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3274/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3274/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3448
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3448/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3448/comments
https://api.github.com/repos/huggingface/datasets/issues/3448/events
https://github.com/huggingface/datasets/issues/3448
1,083,231,080
I_kwDODunzps5AkMto
3,448
JSONDecodeError with HuggingFace dataset viewer
{ "avatar_url": "https://avatars.githubusercontent.com/u/57716109?v=4", "events_url": "https://api.github.com/users/kathrynchapman/events{/privacy}", "followers_url": "https://api.github.com/users/kathrynchapman/followers", "following_url": "https://api.github.com/users/kathrynchapman/following{/other_user}", "gists_url": "https://api.github.com/users/kathrynchapman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kathrynchapman", "id": 57716109, "login": "kathrynchapman", "node_id": "MDQ6VXNlcjU3NzE2MTA5", "organizations_url": "https://api.github.com/users/kathrynchapman/orgs", "received_events_url": "https://api.github.com/users/kathrynchapman/received_events", "repos_url": "https://api.github.com/users/kathrynchapman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kathrynchapman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kathrynchapman/subscriptions", "type": "User", "url": "https://api.github.com/users/kathrynchapman" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
null
[]
null
[ "Hi ! I think the issue comes from the dataset_infos.json file: it has the \"flat\" field twice.\r\n\r\nCan you try deleting this file and regenerating it please ?", "Thanks! That fixed that, but now I am getting:\r\nServer Error\r\nStatus code: 400\r\nException: KeyError\r\nMessage: 'feature'\r\n\r\nI checked the dataset_infos.json and pubmed_neg.py script, I don't use 'feature' anywhere as a key. Is the dataset viewer expecting that I do?", "It seems that the `feature` key is missing from some feature type definition in your dataset_infos.json:\r\n```json\r\n\t\t\t\"tokens\": {\r\n\t\t\t\t\"dtype\": \"list\",\r\n\t\t\t\t\"id\": null,\r\n\t\t\t\t\"_type\": \"Sequence\"\r\n\t\t\t},\r\n\t\t\t\"tags\": {\r\n\t\t\t\t\"dtype\": \"list\",\r\n\t\t\t\t\"id\": null,\r\n\t\t\t\t\"_type\": \"Sequence\"\r\n\t\t\t}\r\n```\r\nThey should be\r\n```json\r\n\t\t\t\"tokens\": {\r\n\t\t\t\t\"dtype\": \"list\",\r\n\t\t\t\t\"id\": null,\r\n\t\t\t\t\"_type\": \"Sequence\"\r\n \"feature\": {\"dtype\": \"string\", \"id\": null, \"_type\": \"Value\"}\r\n\t\t\t},\r\n\t\t\t\"tags\": {\r\n\t\t\t\t\"dtype\": \"list\",\r\n\t\t\t\t\"id\": null,\r\n\t\t\t\t\"_type\": \"Sequence\",\r\n \"feature\": {\"num_classes\": 5, \"names\": [\"-\", \"S\", \"H\", \"N\", \"C\"], \"names_file\": null, \"id\": null, \"_type\": \"ClassLabel\"}\r\n\t\t\t}\r\n```\r\n\r\nNote that you can generate the dataset_infos.json automatically to avoid mistakes:\r\n```bash\r\ndatasets-cli test ./path/to/dataset --save_infos\r\n```" ]
"2021-12-17T12:52:41"
"2022-02-24T09:10:26"
"2022-02-24T09:10:26"
NONE
null
null
null
## Dataset viewer issue for 'pubmed_neg' **Link:** https://huggingface.co/datasets/IGESML/pubmed_neg I am getting the error: Status code: 400 Exception: JSONDecodeError Message: Expecting property name enclosed in double quotes: line 61 column 2 (char 1202) I have checked all files - I am not using single quotes anywhere. Not sure what is causing this issue. Am I the one who added this dataset ? Yes
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3448/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3448/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3777
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3777/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3777/comments
https://api.github.com/repos/huggingface/datasets/issues/3777/events
https://github.com/huggingface/datasets/pull/3777
1,147,232,875
PR_kwDODunzps4zTVrz
3,777
Start removing canonical datasets logic
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "I'm not sure if the documentation explains why the dataset identifiers might have a namespace or not (the user/org): 'glue' vs 'severo/glue'. Do you think we should explain it, and relate it to the GitHub/Hub distinction?", "> I'm not sure if the documentation explains why the dataset identifiers might have a namespace or not (the user/org): 'glue' vs 'severo/glue'. Do you think we should explain it, and relate it to the GitHub/Hub distinction?\r\n\r\nI added an explanation, let me know if it sounds good to you:\r\n\r\n```\r\nDatasets used to be hosted on our GitHub repository, but all datasets have now been migrated to the Hugging Face Hub.\r\nThe legacy GitHub datasets were added originally on our GitHub repository and therefore don't have a namespace: \"squad\", \"glue\", etc. unlike the other datasets that are named \"username/dataset_name\" or \"org/dataset_name\".\r\n```\r\n", "Thanks for the feedbacks ! Merging this now - if you have some comments I can take care of them in a subsequent PR\r\n\r\nI'll also take care of resolving the conflicts with https://github.com/huggingface/datasets/pull/3690" ]
"2022-02-22T18:23:30"
"2022-02-24T15:04:37"
"2022-02-24T15:04:36"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3777.diff", "html_url": "https://github.com/huggingface/datasets/pull/3777", "merged_at": "2022-02-24T15:04:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/3777.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3777" }
I updated the source code and the documentation to start removing the "canonical datasets" logic. Indeed this makes the documentation confusing and we don't want this distinction anymore in the future. Ideally users should share their datasets on the Hub directly. ### Changes - the documentation about dataset loading mentions the datasets on the Hub (no difference between canonical and community, since they all have their own repository now) - the documentation about adding a dataset doesn't explain the technical differences between canonical and community anymore, and only presents how to add a community dataset. There is still a small section at the bottom that mentions the datasets that are still on GitHub and redirects to the `ADD_NEW_DATASET.md` guide on GitHub about how to contribute a dataset to the `datasets` library - the code source doesn't mention "canonical" anymore anywhere. There is still a `GitHubDatasetModuleFactory` class that is left, but I updated the docstring to say that it will be eventually removed in favor of the `HubDatasetModuleFactory` classes that already exist Would love to have your feedbacks on this ! cc @julien-c @thomwolf @SBrandeis
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 3, "hooray": 0, "laugh": 0, "rocket": 2, "total_count": 5, "url": "https://api.github.com/repos/huggingface/datasets/issues/3777/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3777/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6014
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6014/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6014/comments
https://api.github.com/repos/huggingface/datasets/issues/6014/events
https://github.com/huggingface/datasets/issues/6014
1,798,213,816
I_kwDODunzps5rLpC4
6,014
Request to Share/Update Dataset Viewer Code
{ "avatar_url": "https://avatars.githubusercontent.com/u/105081034?v=4", "events_url": "https://api.github.com/users/lilyorlilypad/events{/privacy}", "followers_url": "https://api.github.com/users/lilyorlilypad/followers", "following_url": "https://api.github.com/users/lilyorlilypad/following{/other_user}", "gists_url": "https://api.github.com/users/lilyorlilypad/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lilyorlilypad", "id": 105081034, "login": "lilyorlilypad", "node_id": "U_kgDOBkNoyg", "organizations_url": "https://api.github.com/users/lilyorlilypad/orgs", "received_events_url": "https://api.github.com/users/lilyorlilypad/received_events", "repos_url": "https://api.github.com/users/lilyorlilypad/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lilyorlilypad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lilyorlilypad/subscriptions", "type": "User", "url": "https://api.github.com/users/lilyorlilypad" }
[ { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" } ]
closed
false
null
[]
null
[ "Hi ! The huggingface/dataset-viewer code was not maintained anymore because we switched to a new dataset viewer that is deployed available for each dataset the Hugging Face website.\r\n\r\nWhat are you using this old repository for ?", "I think these parts are outdated:\r\n\r\n* https://github.com/huggingface/datasets-viewer/blob/8efad8eae313a891f713469983bf4c744786f26e/run.py#L126-L131\r\n* https://github.com/huggingface/datasets-viewer/blob/8efad8eae313a891f713469983bf4c744786f26e/run.py#L145-L150\r\n\r\nTo make the viewer work, the first one should be replaced with the following:\r\n```python\r\ndataset_module = datasets.load.dataset_module_factory(path)\r\nbuilder_cls = datasets.load.import_main_class(dataset_module.module_path)\r\nconfs = builder_cls.BUILDER_CONFIGS\r\n```\r\nAnd the second one:\r\n```python\r\ndataset_module = datasets.load.dataset_module_factory(path)\r\nbuilder_cls = datasets.load.import_main_class(dataset_module.module_path)\r\nif conf:\r\n builder_instance = builder_cls(name=conf, cache_dir=path if path_to_datasets is not None else None)\r\nelse:\r\n builder_instance = builder_cls(cache_dir=path if path_to_datasets is not None else None)\r\n```\r\n\r\nBut as @lhoestq suggested, it's better to use the `datasets-server` API nowadays to [fetch the rows](https://huggingface.co/docs/datasets-server/rows).", "> The dataset viewer on the Hugging Face website is incredibly useful\r\n\r\n@mariosasko i think @lilyorlilypad wants to run the new dataset-viewer, not the old one", "> wants to run the new dataset-viewer, not the old one\r\n\r\nThanks for the clarification for me. I do want to run the new dataset-viewer. ", "It should be possible to run it locally using the HF datasets-server API (docs [here](https://huggingface.co/docs/datasets-server)) but the front end part is not open source (yet ?)\r\n\r\nThe back-end is open source though if you're interested: https://github.com/huggingface/datasets-server\r\nIt automatically converts datasets on HF to Parquet, which is the format we use to power the viewer.", "the new frontend would probably be hard to open source, as is, as it's quite intertwined with the Hub's code.\r\n\r\nHowever, at some point it would be amazing to have a community-driven open source implementation of a frontend to datasets-server! ", "For the frontend viewer, see https://github.com/huggingface/datasets/issues/6139.\r\n\r\nAlso mentioned in https://github.com/huggingface/datasets-server/issues/213 and https://github.com/huggingface/datasets-server/issues/441\r\n\r\nClosing as a duplicate of https://github.com/huggingface/datasets/issues/6139" ]
"2023-07-11T06:36:09"
"2023-09-25T12:01:27"
"2023-09-25T12:01:17"
NONE
null
null
null
Overview: The repository (huggingface/datasets-viewer) was recently archived and when I tried to run the code, there was the error message "AttributeError: module 'datasets.load' has no attribute 'prepare_module'". I could not resolve the issue myself due to lack of documentation of that attribute. Request: I kindly request the sharing of the code responsible for the dataset preview functionality or help with resolving the error. The dataset viewer on the Hugging Face website is incredibly useful since it is compatible with different types of inputs. It allows users to find datasets that meet their needs more efficiently. If needed, I am willing to contribute to the project by testing, documenting, and providing feedback on the dataset viewer code. Thank you for considering this request, and I look forward to your response.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6014/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6014/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1804
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1804/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1804/comments
https://api.github.com/repos/huggingface/datasets/issues/1804/events
https://github.com/huggingface/datasets/pull/1804
798,483,881
MDExOlB1bGxSZXF1ZXN0NTY1MjkzMTc3
1,804
Add SICK dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/36051308?v=4", "events_url": "https://api.github.com/users/calpt/events{/privacy}", "followers_url": "https://api.github.com/users/calpt/followers", "following_url": "https://api.github.com/users/calpt/following{/other_user}", "gists_url": "https://api.github.com/users/calpt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/calpt", "id": 36051308, "login": "calpt", "node_id": "MDQ6VXNlcjM2MDUxMzA4", "organizations_url": "https://api.github.com/users/calpt/orgs", "received_events_url": "https://api.github.com/users/calpt/received_events", "repos_url": "https://api.github.com/users/calpt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/calpt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/calpt/subscriptions", "type": "User", "url": "https://api.github.com/users/calpt" }
[]
closed
false
null
[]
null
[]
"2021-02-01T15:57:44"
"2021-02-05T17:46:28"
"2021-02-05T15:49:25"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1804.diff", "html_url": "https://github.com/huggingface/datasets/pull/1804", "merged_at": "2021-02-05T15:49:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/1804.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1804" }
Adds the SICK dataset (http://marcobaroni.org/composes/sick.html). Closes #1772. Edit: also closes #1632, which is the original issue requesting the dataset. The newer one is a duplicate.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1804/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1804/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2340
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2340/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2340/comments
https://api.github.com/repos/huggingface/datasets/issues/2340/events
https://github.com/huggingface/datasets/pull/2340
882,370,824
MDExOlB1bGxSZXF1ZXN0NjM1OTExNzIx
2,340
More consistent copy logic
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[]
"2021-05-09T14:17:33"
"2021-05-11T08:58:33"
"2021-05-11T08:58:33"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2340.diff", "html_url": "https://github.com/huggingface/datasets/pull/2340", "merged_at": "2021-05-11T08:58:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/2340.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2340" }
Use `info.copy()` instead of `copy.deepcopy(info)`. `Features.copy` now creates a deep copy.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2340/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2340/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5266
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5266/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5266/comments
https://api.github.com/repos/huggingface/datasets/issues/5266/events
https://github.com/huggingface/datasets/pull/5266
1,455,281,310
PR_kwDODunzps5DN9BT
5,266
Specify arguments as keywords in librosa.reshape to avoid future errors
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-11-18T14:58:47"
"2022-11-21T15:45:02"
"2022-11-21T15:41:57"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5266.diff", "html_url": "https://github.com/huggingface/datasets/pull/5266", "merged_at": "2022-11-21T15:41:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/5266.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5266" }
Fixes a warning and future deprecation from `librosa.reshape`: ``` FutureWarning: Pass orig_sr=16000, target_sr=48000 as keyword args. From version 0.10 passing these as positional arguments will result in an error array = librosa.resample(array, sampling_rate, self.sampling_rate, res_type="kaiser_best") ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5266/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5266/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3925
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3925/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3925/comments
https://api.github.com/repos/huggingface/datasets/issues/3925/events
https://github.com/huggingface/datasets/pull/3925
1,169,913,769
PR_kwDODunzps40eaq8
3,925
Fix main_classes docs index
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hmm it's still not good \r\n![image](https://user-images.githubusercontent.com/42851186/158429361-e19ce25b-c259-4ded-8473-075deafdbb96.png)\r\n\r\nany idea what could cause this ?", "Ok fixed :)" ]
"2022-03-15T16:33:46"
"2022-03-22T13:49:11"
"2022-03-22T13:44:04"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3925.diff", "html_url": "https://github.com/huggingface/datasets/pull/3925", "merged_at": "2022-03-22T13:44:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/3925.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3925" }
Currently the `main_classes` documentation has a wrong index. I believe this comes from issues in the examples of the Translation feature types ![image](https://user-images.githubusercontent.com/42851186/158426345-2ee1ceef-ddf3-4a6f-a93e-d1a8f38a44f5.png)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3925/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3925/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4728
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4728/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4728/comments
https://api.github.com/repos/huggingface/datasets/issues/4728/events
https://github.com/huggingface/datasets/issues/4728
1,312,897,454
I_kwDODunzps5OQTmu
4,728
load_dataset gives "403" error when using Financial Phrasebank
{ "avatar_url": "https://avatars.githubusercontent.com/u/2209134?v=4", "events_url": "https://api.github.com/users/rohitvincent/events{/privacy}", "followers_url": "https://api.github.com/users/rohitvincent/followers", "following_url": "https://api.github.com/users/rohitvincent/following{/other_user}", "gists_url": "https://api.github.com/users/rohitvincent/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rohitvincent", "id": 2209134, "login": "rohitvincent", "node_id": "MDQ6VXNlcjIyMDkxMzQ=", "organizations_url": "https://api.github.com/users/rohitvincent/orgs", "received_events_url": "https://api.github.com/users/rohitvincent/received_events", "repos_url": "https://api.github.com/users/rohitvincent/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rohitvincent/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rohitvincent/subscriptions", "type": "User", "url": "https://api.github.com/users/rohitvincent" }
[]
closed
false
null
[]
null
[ "Hi @rohitvincent, thanks for reporting.\r\n\r\nUnfortunately I'm not able to reproduce your issue:\r\n```python\r\nIn [2]: from datasets import load_dataset, DownloadMode\r\n ...: load_dataset(path='financial_phrasebank',name='sentences_allagree', download_mode=\"force_redownload\")\r\nDownloading builder script: 6.04kB [00:00, 2.87MB/s] \r\nDownloading metadata: 13.7kB [00:00, 7.24MB/s] \r\nDownloading and preparing dataset financial_phrasebank/sentences_allagree (download: 665.91 KiB, generated: 296.26 KiB, post-processed: Unknown size, total: 962.17 KiB) to .../.cache/huggingface/datasets/financial_phrasebank/sentences_allagree/1.0.0/550bde12e6c30e2674da973a55f57edde5181d53f5a5a34c1531c53f93b7e141...\r\nDownloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 682k/682k [00:00<00:00, 7.66MB/s]\r\nDataset financial_phrasebank downloaded and prepared to .../.cache/huggingface/datasets/financial_phrasebank/sentences_allagree/1.0.0/550bde12e6c30e2674da973a55f57edde5181d53f5a5a34c1531c53f93b7e141. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 918.80it/s]\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label'],\r\n num_rows: 2264\r\n })\r\n})\r\n```\r\n\r\nAre you able to access the link? https://www.researchgate.net/profile/Pekka-Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip", "Yes was able to download from the link manually. But still, get the same error when I use load_dataset.", "Fixed once data files are hosted on the Hub:\r\n- #4598" ]
"2022-07-21T08:43:32"
"2022-08-04T08:32:35"
"2022-08-04T08:32:35"
NONE
null
null
null
I tried both codes below to download the financial phrasebank dataset (https://huggingface.co/datasets/financial_phrasebank) with the sentences_allagree subset. However, the code gives a 403 error when executed from multiple machines locally or on the cloud. ``` from datasets import load_dataset, DownloadMode load_dataset(path='financial_phrasebank',name='sentences_allagree',download_mode=DownloadMode.FORCE_REDOWNLOAD) ``` ``` from datasets import load_dataset, DownloadMode load_dataset(path='financial_phrasebank',name='sentences_allagree') ``` **Error** ConnectionError: Couldn't reach https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip (error 403)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4728/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4728/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2452
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2452/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2452/comments
https://api.github.com/repos/huggingface/datasets/issues/2452/events
https://github.com/huggingface/datasets/issues/2452
913,603,877
MDU6SXNzdWU5MTM2MDM4Nzc=
2,452
MRPC test set differences between torch and tensorflow datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/50372080?v=4", "events_url": "https://api.github.com/users/FredericOdermatt/events{/privacy}", "followers_url": "https://api.github.com/users/FredericOdermatt/followers", "following_url": "https://api.github.com/users/FredericOdermatt/following{/other_user}", "gists_url": "https://api.github.com/users/FredericOdermatt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/FredericOdermatt", "id": 50372080, "login": "FredericOdermatt", "node_id": "MDQ6VXNlcjUwMzcyMDgw", "organizations_url": "https://api.github.com/users/FredericOdermatt/orgs", "received_events_url": "https://api.github.com/users/FredericOdermatt/received_events", "repos_url": "https://api.github.com/users/FredericOdermatt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/FredericOdermatt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FredericOdermatt/subscriptions", "type": "User", "url": "https://api.github.com/users/FredericOdermatt" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Realized that `tensorflow_datasets` is not provided by Huggingface and should therefore raise the issue there." ]
"2021-06-07T14:20:26"
"2021-06-07T14:34:32"
"2021-06-07T14:34:32"
NONE
null
null
null
## Describe the bug When using `load_dataset("glue", "mrpc")` to load the MRPC dataset, the test set includes the labels. When using `tensorflow_datasets.load('glue/{}'.format('mrpc'))` to load the dataset the test set does not contain the labels. There should be consistency between torch and tensorflow ways of importing the GLUE datasets. ## Steps to reproduce the bug Minimal working code ```python from datasets import load_dataset import tensorflow as tf import tensorflow_datasets # torch dataset = load_dataset("glue", "mrpc") # tf data = tensorflow_datasets.load('glue/{}'.format('mrpc')) data = list(data['test'].as_numpy_iterator()) for i in range(40,50): tf_sentence1 = data[i]['sentence1'].decode("utf-8") tf_sentence2 = data[i]['sentence2'].decode("utf-8") tf_label = data[i]['label'] index = data[i]['idx'] print('Index {}'.format(index)) torch_sentence1 = dataset['test']['sentence1'][index] torch_sentence2 = dataset['test']['sentence2'][index] torch_label = dataset['test']['label'][index] print('Tensorflow: \n\tSentence1 {}\n\tSentence2 {}\n\tLabel {}'.format(tf_sentence1, tf_sentence2, tf_label)) print('Torch: \n\tSentence1 {}\n\tSentence2 {}\n\tLabel {}'.format(torch_sentence1, torch_sentence2, torch_label)) ``` Sample output ``` Index 954 Tensorflow: Sentence1 Sabri Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate yesterday on charges of violating U.S. arms-control laws . Sentence2 The elder Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate Wednesday on charges of violating U.S. arms control laws . Label -1 Torch: Sentence1 Sabri Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate yesterday on charges of violating U.S. arms-control laws . Sentence2 The elder Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate Wednesday on charges of violating U.S. arms control laws . Label 1 Index 711 Tensorflow: Sentence1 Others keep records sealed for as little as five years or as much as 30 . Sentence2 Some states make them available immediately ; others keep them sealed for as much as 30 years . Label -1 Torch: Sentence1 Others keep records sealed for as little as five years or as much as 30 . Sentence2 Some states make them available immediately ; others keep them sealed for as much as 30 years . Label 0 ``` ## Expected results I would expect the datasets to be independent of whether I am working with torch or tensorflow. ## Actual results Test set labels are provided in the `datasets.load_datasets()` for MRPC. However MRPC is the only task where the test set labels are not -1. ## Environment info - `datasets` version: 1.7.0 - Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyArrow version: 3.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2452/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2452/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1363
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1363/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1363/comments
https://api.github.com/repos/huggingface/datasets/issues/1363/events
https://github.com/huggingface/datasets/pull/1363
760,160,944
MDExOlB1bGxSZXF1ZXN0NTM1MDM4NjM0
1,363
Adding OPUS MultiUN
{ "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patil-suraj", "id": 27137566, "login": "patil-suraj", "node_id": "MDQ6VXNlcjI3MTM3NTY2", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "repos_url": "https://api.github.com/users/patil-suraj/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "type": "User", "url": "https://api.github.com/users/patil-suraj" }
[]
closed
false
null
[]
null
[]
"2020-12-09T09:29:01"
"2020-12-09T17:54:20"
"2020-12-09T17:54:20"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1363.diff", "html_url": "https://github.com/huggingface/datasets/pull/1363", "merged_at": "2020-12-09T17:54:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/1363.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1363" }
Adding UnMulti http://www.euromatrixplus.net/multi-un/
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1363/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1363/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5171
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5171/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5171/comments
https://api.github.com/repos/huggingface/datasets/issues/5171/events
https://github.com/huggingface/datasets/pull/5171
1,425,355,111
PR_kwDODunzps5BpsXf
5,171
Add PB and TB in convert_file_size_to_int
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-10-27T09:50:31"
"2022-10-27T12:14:27"
"2022-10-27T12:12:30"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5171.diff", "html_url": "https://github.com/huggingface/datasets/pull/5171", "merged_at": "2022-10-27T12:12:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/5171.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5171" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5171/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5171/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4516
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4516/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4516/comments
https://api.github.com/repos/huggingface/datasets/issues/4516/events
https://github.com/huggingface/datasets/pull/4516
1,273,825,640
PR_kwDODunzps45ykYX
4,516
Fix hashing for python 3.9
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "What do you think @albertvillanova ?" ]
"2022-06-16T16:42:31"
"2022-06-28T13:33:46"
"2022-06-28T13:23:06"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4516.diff", "html_url": "https://github.com/huggingface/datasets/pull/4516", "merged_at": "2022-06-28T13:23:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/4516.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4516" }
In python 3.9, pickle hashes the `glob_ids` dictionary in addition to the `globs` of a function. Therefore the test at `tests/test_fingerprint.py::RecurseDumpTest::test_recurse_dump_for_function_with_shuffled_globals` is currently failing for python 3.9 To make hashing deterministic when the globals are not in the same order, we also need to make the order of `glob_ids` deterministic. Right now we don't have a CI to test python 3.9 but we should definitely have one. For this PR in particular I ran the tests locally using python 3.9 and they're passing now. Fix https://github.com/huggingface/datasets/issues/4506
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 5, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 5, "url": "https://api.github.com/repos/huggingface/datasets/issues/4516/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4516/timeline
null
null
true