Dataset schema (column, dtype, and observed string-length range, numeric range, or number of distinct classes):

| Column | Dtype | Observed range / classes |
|---|---|---|
| url | string | length 58–61 |
| repository_url | string | 1 distinct value |
| labels_url | string | length 72–75 |
| comments_url | string | length 67–70 |
| events_url | string | length 65–68 |
| html_url | string | length 46–51 |
| id | int64 | 599M–1.11B |
| node_id | string | length 18–32 |
| number | int64 | 1–3.59k |
| title | string | length 1–276 |
| user | dict | — |
| labels | list | — |
| state | string | 2 distinct values |
| locked | bool | 1 class |
| assignee | dict | — |
| assignees | list | — |
| milestone | dict | — |
| comments | sequence | — |
| created_at | int64 | 1,587B–1,643B (epoch ms) |
| updated_at | int64 | 1,587B–1,643B (epoch ms) |
| closed_at | int64 | 1,587B–1,643B (epoch ms) |
| author_association | string | 3 distinct values |
| active_lock_reason | null | — |
| draft | bool | 2 classes |
| pull_request | dict | — |
| body | string | length 0–228k |
| reactions | dict | — |
| timeline_url | string | length 67–70 |
| performed_via_github_app | null | — |
| is_pull_request | bool | 2 classes |
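A minimal sketch of loading and inspecting a dump with this schema via the `datasets` library; the repository ID below is a placeholder assumption, not the actual location of this data. Note that the three `*_at` columns are int64 epoch milliseconds.

```python
from datetime import datetime, timezone

from datasets import load_dataset

# Placeholder repository ID (assumption) -- point at wherever this dump is hosted.
issues = load_dataset("username/github-issues", split="train")

print(issues.features)  # column names and dtypes, matching the schema above
print(issues.num_rows)

# created_at / updated_at / closed_at are epoch *milliseconds*:
# values around 1,587B-1,643B correspond to 2020-2022.
row = issues[0]
created = datetime.fromtimestamp(row["created_at"] / 1000, tz=timezone.utc)
print(row["number"], row["title"], created.isoformat())
```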

url: https://api.github.com/repos/huggingface/datasets/issues/1359
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1359/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1359/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1359/events
html_url: https://github.com/huggingface/datasets/pull/1359
id: 760,055,969
node_id: MDExOlB1bGxSZXF1ZXN0NTM0OTUxMTgy
number: 1,359
title: Add JNLPBA
user:
{ "login": "edugp", "id": 17855740, "node_id": "MDQ6VXNlcjE3ODU1NzQw", "avatar_url": "https://avatars.githubusercontent.com/u/17855740?v=4", "gravatar_id": "", "url": "https://api.github.com/users/edugp", "html_url": "https://github.com/edugp", "followers_url": "https://api.github.com/users/edugp/followers", "following_url": "https://api.github.com/users/edugp/following{/other_user}", "gists_url": "https://api.github.com/users/edugp/gists{/gist_id}", "starred_url": "https://api.github.com/users/edugp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/edugp/subscriptions", "organizations_url": "https://api.github.com/users/edugp/orgs", "repos_url": "https://api.github.com/users/edugp/repos", "events_url": "https://api.github.com/users/edugp/events{/privacy}", "received_events_url": "https://api.github.com/users/edugp/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,607,496,531,000
updated_at: 1,607,610,276,000
closed_at: 1,607,610,276,000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1359", "html_url": "https://github.com/huggingface/datasets/pull/1359", "diff_url": "https://github.com/huggingface/datasets/pull/1359.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1359.patch", "merged_at": 1607610276000 }
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1359/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1359/timeline
performed_via_github_app: null
is_pull_request: true
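In records like the one above, `pull_request.merged_at` is what distinguishes a merged pull request (an epoch-millisecond timestamp) from one closed without merging (`null`). A small sketch of that check over rows parsed as plain Python dicts, using two records abbreviated from this dump:

```python
# Two rows from this dump, trimmed to the fields the check needs.
records = [
    {"number": 1359, "is_pull_request": True,
     "pull_request": {"merged_at": 1607610276000}},  # merged (record above)
    {"number": 1349, "is_pull_request": True,
     "pull_request": {"merged_at": None}},           # closed without merging
]

def is_merged(row: dict) -> bool:
    """A PR counts as merged when pull_request.merged_at holds a timestamp."""
    pr = row.get("pull_request")
    return bool(pr) and pr.get("merged_at") is not None

print([r["number"] for r in records if is_merged(r)])  # -> [1359]
```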

url: https://api.github.com/repos/huggingface/datasets/issues/1358
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1358/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1358/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1358/events
html_url: https://github.com/huggingface/datasets/pull/1358
id: 760,031,131
node_id: MDExOlB1bGxSZXF1ZXN0NTM0OTI5ODIx
number: 1,358
title: Add spider dataset
user:
{ "login": "olinguyen", "id": 4341867, "node_id": "MDQ6VXNlcjQzNDE4Njc=", "avatar_url": "https://avatars.githubusercontent.com/u/4341867?v=4", "gravatar_id": "", "url": "https://api.github.com/users/olinguyen", "html_url": "https://github.com/olinguyen", "followers_url": "https://api.github.com/users/olinguyen/followers", "following_url": "https://api.github.com/users/olinguyen/following{/other_user}", "gists_url": "https://api.github.com/users/olinguyen/gists{/gist_id}", "starred_url": "https://api.github.com/users/olinguyen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/olinguyen/subscriptions", "organizations_url": "https://api.github.com/users/olinguyen/orgs", "repos_url": "https://api.github.com/users/olinguyen/repos", "events_url": "https://api.github.com/users/olinguyen/events{/privacy}", "received_events_url": "https://api.github.com/users/olinguyen/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,607,493,978,000
updated_at: 1,607,613,151,000
closed_at: 1,607,613,151,000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1358", "html_url": "https://github.com/huggingface/datasets/pull/1358", "diff_url": "https://github.com/huggingface/datasets/pull/1358.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1358.patch", "merged_at": 1607613151000 }
body: This PR adds the Spider dataset, a large-scale complex and cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students. The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases. Dataset website: https://yale-lily.github.io/spider Paper link: https://www.aclweb.org/anthology/D18-1425/
reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1358/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1358/timeline
performed_via_github_app: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1357
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1357/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1357/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1357/events
html_url: https://github.com/huggingface/datasets/pull/1357
id: 760,023,525
node_id: MDExOlB1bGxSZXF1ZXN0NTM0OTIzMzA4
number: 1,357
title: Youtube caption corrections
user:
{ "login": "2dot71mily", "id": 21292059, "node_id": "MDQ6VXNlcjIxMjkyMDU5", "avatar_url": "https://avatars.githubusercontent.com/u/21292059?v=4", "gravatar_id": "", "url": "https://api.github.com/users/2dot71mily", "html_url": "https://github.com/2dot71mily", "followers_url": "https://api.github.com/users/2dot71mily/followers", "following_url": "https://api.github.com/users/2dot71mily/following{/other_user}", "gists_url": "https://api.github.com/users/2dot71mily/gists{/gist_id}", "starred_url": "https://api.github.com/users/2dot71mily/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/2dot71mily/subscriptions", "organizations_url": "https://api.github.com/users/2dot71mily/orgs", "repos_url": "https://api.github.com/users/2dot71mily/repos", "events_url": "https://api.github.com/users/2dot71mily/events{/privacy}", "received_events_url": "https://api.github.com/users/2dot71mily/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Sorry about forgetting flake8.\r\nRather than use up the circleci resources on a new push with only formatting changes, I will wait to push until the results from all tests finish and/or any feedback comes in... probably tomorrow for me.", "\r\nSo... my normal work is with mercurial and seem to have clearly forked this up using git... :(\r\n\r\nWhat I did is after calling:\r\n```\r\ngit fetch upstream\r\ngit rebase upstream/master\r\n```\r\n\r\nI then I attempt to pull in my most recent changes UI commit changes based on @lhoestq's feedback with:\r\n```\r\ngit pull\r\n``` \r\n... which I now suspect undid the above fetch and rebase. Will look into fixing later today when I have more time. Sorry!\r\n", "My dummy data seems quite large as a single row is composed of tokens/labels for an entire youtube video, with at least one row required for each file, which in this case 1 file per 13 youtube channels.\r\n\r\nTo make it smaller I passed `--n_lines 1` to reduce about 5x.\r\n\r\nI then manually reduced size of the particularly long youtube lectures to get the size to about 30KB. However, after recompressing into a zip, and running dummy data test I got the following error:\r\n`FAILED tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_youtube_caption_corrections - OSError: Cannot find data file. `, despite file being there, which I haven't had a chance yet to debug.", "I wrote a small script to generate a smaller json file for the dummy_data, with the hope that I could resolve the pytest error noted above (in case related to a manual typo I could have introduce), however the test contains to fail locally... here's to hoping it can pass on remote!", "Sorry for delayed comments here. Last commit made two changes:\r\n- Increased the valency of the labels from just True/False to more categories to describe the various types of diffs encountered. This required some rewrite of the README\r\n- Reduced the number of remote files to be downloaded from 13 to 4, by combining all 13 of the channel-specific files together, and the splitting them up in a way to meet Github file size requirements. This also reduces size of the dummy-data.", "@lhoestq, thank you for the great feedback, especially given how busy you guys are now! \r\n\r\nI checked out GitHub release tags and looks cool. I have added the version tag to the url, instead of the commit sha as originally suggested, with the hope that it serves the same purpose of pinning the content to this url. Please let me know if I have misunderstood.\r\n\r\nIn regard to dynamically changing the number of files downloaded by first downloading a JSON listing the files, I love that idea. But I am a little confused, as I was thinking that any changes to the dataset itself would require a new PR with an updated `dataset_infos.json`, e.g. `num_examples` would increase. \r\n\r\nIf the purpose of this is not to permit dynamic (without a PR needed) growth of the number of files, but instead to provide stability to the consumers of the dataset, maybe I continued use the release tags, maintaining access to old releases could serve this purpose? I am still learning about these release tags... ", "For dynamic datasets, i.e. datasets that evolve over time, we support custom configurations: they are configurations that are not part of the BUILDER_CONFIGS or in the dataset_infos.json\r\n\r\nFor example for wikipedia, you can use the latest wiki dump by specifying `date=` inside `load_dataset()`. A configuration is created on the fly for this date and is used to build the dataset using the latest data.\r\n\r\nTherefore we don't need to have PRs to update the script for each wikipedia release.\r\n\r\nOne downside though is that we don't have metadata in advance such as the size of the dataset.\r\n\r\nI think this could be a nice addition for the youtube caption dataset in the future to be have a system of releases and be able to load the version we want easily. What do you think ?", "\r\n\r\n\r\n\r\n> For dynamic datasets, i.e. datasets that evolve over time, we support custom configurations: they are configurations that are not part of the BUILDER_CONFIGS or in the dataset_infos.json\r\n> \r\n \r\n> I think this could be a nice addition for the youtube caption dataset in the future to be have a system of releases and be able to load the version we want easily. What do you think ?\r\n\r\nThank you for the suggestion! This sounds great! I will take a look at the some datasets that do this, and would love to give it a try in the future, if I continue to grow the captions dataset in a meaningful way. \r\n\r\nAppreciate all the help on this. It has been a really great experience for me. :)", "Excited to merge! And sorry to be such a github n00b, but from what I've quickly read, I don't 'Close pull request', but rather the next steps are action taken on your end... Please let me know if there is some action to be taken at my end first. :/", "Alright merging this one then :) " ]
created_at: 1,607,493,154,000
updated_at: 1,608,055,976,000
closed_at: 1,608,055,976,000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1357", "html_url": "https://github.com/huggingface/datasets/pull/1357", "diff_url": "https://github.com/huggingface/datasets/pull/1357.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1357.patch", "merged_at": 1608055976000 }
body: This PR adds a new dataset of YouTube captions, error and corrections. This dataset was created in just the last week, as inspired by this sprint!
reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1357/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1357/timeline
performed_via_github_app: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1356
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1356/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1356/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1356/events
html_url: https://github.com/huggingface/datasets/pull/1356
id: 759,994,457
node_id: MDExOlB1bGxSZXF1ZXN0NTM0ODk3OTQ1
number: 1,356
title: Add StackOverflow StackSample dataset
user:
{ "login": "ncoop57", "id": 7613470, "node_id": "MDQ6VXNlcjc2MTM0NzA=", "avatar_url": "https://avatars.githubusercontent.com/u/7613470?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ncoop57", "html_url": "https://github.com/ncoop57", "followers_url": "https://api.github.com/users/ncoop57/followers", "following_url": "https://api.github.com/users/ncoop57/following{/other_user}", "gists_url": "https://api.github.com/users/ncoop57/gists{/gist_id}", "starred_url": "https://api.github.com/users/ncoop57/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ncoop57/subscriptions", "organizations_url": "https://api.github.com/users/ncoop57/orgs", "repos_url": "https://api.github.com/users/ncoop57/repos", "events_url": "https://api.github.com/users/ncoop57/events{/privacy}", "received_events_url": "https://api.github.com/users/ncoop57/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
[ "@lhoestq Thanks for the review and suggestions! I've added your comments and pushed the changes. I'm having issues with the dummy data still. When I run the dummy data test\r\n\r\n```bash\r\nRUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_so_stacksample\r\n```\r\nI get this error: \r\n\r\n```\r\n___________________________________________ LocalDatasetTest.test_load_dataset_all_configs_so_stacksample ____________________________________________\r\n\r\nself = <tests.test_dataset_common.LocalDatasetTest testMethod=test_load_dataset_all_configs_so_stacksample>, dataset_name = 'so_stacksample'\r\n\r\n @slow\r\n def test_load_dataset_all_configs(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True)\r\n\r\ntests/test_dataset_common.py:237: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:198: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n\r\nFAILED tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_so_stacksample - AssertionError: False is not true\r\n```\r\n\r\nI tried formatting the data similar to other datasets, but I think I don't have my csv's in the zip folder with the proper name. I also ran the command that's supposed to outline the exact steps I need to perform to get them into the correct format, but I followed them and they don't seem to be working still :/. Any help would be greatly appreciated!\r\n", "Ok I found the issue with the dummy data.\r\nIt's currently failing because it's not generating a single example using the dummy csv file.\r\nThat's because there's only only line in the dummy csv file, and this line is skipped using the `next()` call used to ignore the headers of the csv.\r\n\r\nTo fix the dummy data you must add headers to the dummy csv files.", "Also can you make sure that all the original CSV files have headers ? i.e. check that their first line is just the column names", "> Ok I found the issue with the dummy data.\r\n> It's currently failing because it's not generating a single example using the dummy csv file.\r\n> That's because there's only only line in the dummy csv file, and this line is skipped using the `next()` call used to ignore the headers of the csv.\r\n> \r\n> To fix the dummy data you must add headers to the dummy csv files.\r\n\r\nOh man, I bamboozled myself! Thank you @lhoestq for catching that! I've updated the dummy csv's to include headers and also confirmed that they all have headers, so I am not throwing away any information with that `next()` call. When I run the test locally for the dummy data it passes, so hopefully it is good to go :D", "merging since the Ci is fixed on master" ]
created_at: 1,607,489,991,000
updated_at: 1,608,562,101,000
closed_at: 1,608,562,101,000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1356", "html_url": "https://github.com/huggingface/datasets/pull/1356", "diff_url": "https://github.com/huggingface/datasets/pull/1356.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1356.patch", "merged_at": 1608562101000 }
body: This PR adds the StackOverflow StackSample dataset from Kaggle: https://www.kaggle.com/stackoverflow/stacksample Ran through all of the steps. However, since my dataset requires manually downloading the data, I was unable to run the pytest on the real dataset (the dummy data pytest passed).
reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1356/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1356/timeline
performed_via_github_app: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1355
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1355/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1355/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1355/events
html_url: https://github.com/huggingface/datasets/pull/1355
id: 759,994,208
node_id: MDExOlB1bGxSZXF1ZXN0NTM0ODk3NzQw
number: 1,355
title: Addition of py_ast dataset
user:
{ "login": "reshinthadithyan", "id": 36307201, "node_id": "MDQ6VXNlcjM2MzA3MjAx", "avatar_url": "https://avatars.githubusercontent.com/u/36307201?v=4", "gravatar_id": "", "url": "https://api.github.com/users/reshinthadithyan", "html_url": "https://github.com/reshinthadithyan", "followers_url": "https://api.github.com/users/reshinthadithyan/followers", "following_url": "https://api.github.com/users/reshinthadithyan/following{/other_user}", "gists_url": "https://api.github.com/users/reshinthadithyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/reshinthadithyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/reshinthadithyan/subscriptions", "organizations_url": "https://api.github.com/users/reshinthadithyan/orgs", "repos_url": "https://api.github.com/users/reshinthadithyan/repos", "events_url": "https://api.github.com/users/reshinthadithyan/events{/privacy}", "received_events_url": "https://api.github.com/users/reshinthadithyan/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,607,489,957,000
updated_at: 1,607,530,789,000
closed_at: 1,607,530,788,000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1355", "html_url": "https://github.com/huggingface/datasets/pull/1355", "diff_url": "https://github.com/huggingface/datasets/pull/1355.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1355.patch", "merged_at": 1607530788000 }
body: @lhoestq as discussed in PR #1195
reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1355/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1355/timeline
performed_via_github_app: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1354
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1354/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1354/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1354/events
html_url: https://github.com/huggingface/datasets/pull/1354
id: 759,987,763
node_id: MDExOlB1bGxSZXF1ZXN0NTM0ODkyMzE2
number: 1,354
title: Add TweetQA dataset
user:
{ "login": "anaerobeth", "id": 3663322, "node_id": "MDQ6VXNlcjM2NjMzMjI=", "avatar_url": "https://avatars.githubusercontent.com/u/3663322?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anaerobeth", "html_url": "https://github.com/anaerobeth", "followers_url": "https://api.github.com/users/anaerobeth/followers", "following_url": "https://api.github.com/users/anaerobeth/following{/other_user}", "gists_url": "https://api.github.com/users/anaerobeth/gists{/gist_id}", "starred_url": "https://api.github.com/users/anaerobeth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anaerobeth/subscriptions", "organizations_url": "https://api.github.com/users/anaerobeth/orgs", "repos_url": "https://api.github.com/users/anaerobeth/repos", "events_url": "https://api.github.com/users/anaerobeth/events{/privacy}", "received_events_url": "https://api.github.com/users/anaerobeth/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,607,489,041,000
updated_at: 1,607,613,030,000
closed_at: 1,607,613,030,000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1354", "html_url": "https://github.com/huggingface/datasets/pull/1354", "diff_url": "https://github.com/huggingface/datasets/pull/1354.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1354.patch", "merged_at": 1607613030000 }
body: This PR adds the TweetQA dataset, the first dataset for QA on social media data by leveraging news media and crowdsourcing. Paper: https://arxiv.org/abs/1907.06292 Repository: https://tweetqa.github.io/
reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1354/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1354/timeline
performed_via_github_app: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1353
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1353/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1353/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1353/events
html_url: https://github.com/huggingface/datasets/pull/1353
id: 759,980,004
node_id: MDExOlB1bGxSZXF1ZXN0NTM0ODg2MDk4
number: 1,353
title: New instruction for how to generate dataset_infos.json
user:
{ "login": "ncoop57", "id": 7613470, "node_id": "MDQ6VXNlcjc2MTM0NzA=", "avatar_url": "https://avatars.githubusercontent.com/u/7613470?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ncoop57", "html_url": "https://github.com/ncoop57", "followers_url": "https://api.github.com/users/ncoop57/followers", "following_url": "https://api.github.com/users/ncoop57/following{/other_user}", "gists_url": "https://api.github.com/users/ncoop57/gists{/gist_id}", "starred_url": "https://api.github.com/users/ncoop57/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ncoop57/subscriptions", "organizations_url": "https://api.github.com/users/ncoop57/orgs", "repos_url": "https://api.github.com/users/ncoop57/repos", "events_url": "https://api.github.com/users/ncoop57/events{/privacy}", "received_events_url": "https://api.github.com/users/ncoop57/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,607,487,880,000
updated_at: 1,607,607,915,000
closed_at: 1,607,607,915,000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1353", "html_url": "https://github.com/huggingface/datasets/pull/1353", "diff_url": "https://github.com/huggingface/datasets/pull/1353.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1353.patch", "merged_at": 1607607915000 }
body: Add additional instructions for how to generate dataset_infos.json for manual download datasets. Information courtesy of `Taimur Ibrahim` from the slack channel
reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1353/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1353/timeline
performed_via_github_app: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1352
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1352/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1352/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1352/events
html_url: https://github.com/huggingface/datasets/pull/1352
id: 759,978,543
node_id: MDExOlB1bGxSZXF1ZXN0NTM0ODg0ODg4
number: 1,352
title: change url for prachathai67k to internet archive
user:
{ "login": "cstorm125", "id": 15519308, "node_id": "MDQ6VXNlcjE1NTE5MzA4", "avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cstorm125", "html_url": "https://github.com/cstorm125", "followers_url": "https://api.github.com/users/cstorm125/followers", "following_url": "https://api.github.com/users/cstorm125/following{/other_user}", "gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}", "starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions", "organizations_url": "https://api.github.com/users/cstorm125/orgs", "repos_url": "https://api.github.com/users/cstorm125/repos", "events_url": "https://api.github.com/users/cstorm125/events{/privacy}", "received_events_url": "https://api.github.com/users/cstorm125/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,607,487,637,000
updated_at: 1,607,607,737,000
closed_at: 1,607,607,737,000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1352", "html_url": "https://github.com/huggingface/datasets/pull/1352", "diff_url": "https://github.com/huggingface/datasets/pull/1352.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1352.patch", "merged_at": 1607607737000 }
body: `prachathai67k` is currently downloaded from git-lfs of PyThaiNLP github. Since the size is quite large (~250MB), I moved the URL to archive.org in order to prevent rate limit issues.
reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1352/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1352/timeline
performed_via_github_app: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1351
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1351/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1351/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1351/events
html_url: https://github.com/huggingface/datasets/pull/1351
id: 759,902,770
node_id: MDExOlB1bGxSZXF1ZXN0NTM0ODI0NTcw
number: 1,351
title: added craigslist_bargians
user:
{ "login": "ZacharySBrown", "id": 7950786, "node_id": "MDQ6VXNlcjc5NTA3ODY=", "avatar_url": "https://avatars.githubusercontent.com/u/7950786?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZacharySBrown", "html_url": "https://github.com/ZacharySBrown", "followers_url": "https://api.github.com/users/ZacharySBrown/followers", "following_url": "https://api.github.com/users/ZacharySBrown/following{/other_user}", "gists_url": "https://api.github.com/users/ZacharySBrown/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZacharySBrown/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZacharySBrown/subscriptions", "organizations_url": "https://api.github.com/users/ZacharySBrown/orgs", "repos_url": "https://api.github.com/users/ZacharySBrown/repos", "events_url": "https://api.github.com/users/ZacharySBrown/events{/privacy}", "received_events_url": "https://api.github.com/users/ZacharySBrown/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,607,475,751,000
updated_at: 1,607,609,674,000
closed_at: 1,607,609,674,000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1351", "html_url": "https://github.com/huggingface/datasets/pull/1351", "diff_url": "https://github.com/huggingface/datasets/pull/1351.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1351.patch", "merged_at": 1607609674000 }
body: `craigslist_bargains` data set from [here](https://worksheets.codalab.org/worksheets/0x453913e76b65495d8b9730d41c7e0a0c/) (Cleaned up version of #1278)
reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1351/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1351/timeline
performed_via_github_app: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1350
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1350/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1350/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1350/events
html_url: https://github.com/huggingface/datasets/pull/1350
id: 759,879,789
node_id: MDExOlB1bGxSZXF1ZXN0NTM0ODA1OTY3
number: 1,350
title: add LeNER-Br dataset
user:
{ "login": "jonatasgrosman", "id": 5097052, "node_id": "MDQ6VXNlcjUwOTcwNTI=", "avatar_url": "https://avatars.githubusercontent.com/u/5097052?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jonatasgrosman", "html_url": "https://github.com/jonatasgrosman", "followers_url": "https://api.github.com/users/jonatasgrosman/followers", "following_url": "https://api.github.com/users/jonatasgrosman/following{/other_user}", "gists_url": "https://api.github.com/users/jonatasgrosman/gists{/gist_id}", "starred_url": "https://api.github.com/users/jonatasgrosman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonatasgrosman/subscriptions", "organizations_url": "https://api.github.com/users/jonatasgrosman/orgs", "repos_url": "https://api.github.com/users/jonatasgrosman/repos", "events_url": "https://api.github.com/users/jonatasgrosman/events{/privacy}", "received_events_url": "https://api.github.com/users/jonatasgrosman/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
[ "I don't know what happened, my first commit passed on all checks, but after just a README.md update one of the scripts failed, is it normal? 😕 ", "Looks like a flaky connection error, I've launched a re-run, it should be fine :)", "The RemoteDatasetTest error in the CI is just a connection error, we can ignore it", "merging since the CI is fixed on master" ]
created_at: 1,607,472,398,000
updated_at: 1,607,609,493,000
closed_at: 1,607,609,493,000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1350", "html_url": "https://github.com/huggingface/datasets/pull/1350", "diff_url": "https://github.com/huggingface/datasets/pull/1350.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1350.patch", "merged_at": 1607609493000 }
body: Adding the LeNER-Br dataset, a Portuguese language dataset for named entity recognition
reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1350/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1350/timeline
performed_via_github_app: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1349
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1349/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1349/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1349/events
html_url: https://github.com/huggingface/datasets/pull/1349
id: 759,870,664
node_id: MDExOlB1bGxSZXF1ZXN0NTM0Nzk4NDQ3
number: 1,349
title: initial commit for MultiReQA
user:
{ "login": "Karthik-Bhaskar", "id": 13200370, "node_id": "MDQ6VXNlcjEzMjAwMzcw", "avatar_url": "https://avatars.githubusercontent.com/u/13200370?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Karthik-Bhaskar", "html_url": "https://github.com/Karthik-Bhaskar", "followers_url": "https://api.github.com/users/Karthik-Bhaskar/followers", "following_url": "https://api.github.com/users/Karthik-Bhaskar/following{/other_user}", "gists_url": "https://api.github.com/users/Karthik-Bhaskar/gists{/gist_id}", "starred_url": "https://api.github.com/users/Karthik-Bhaskar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Karthik-Bhaskar/subscriptions", "organizations_url": "https://api.github.com/users/Karthik-Bhaskar/orgs", "repos_url": "https://api.github.com/users/Karthik-Bhaskar/repos", "events_url": "https://api.github.com/users/Karthik-Bhaskar/events{/privacy}", "received_events_url": "https://api.github.com/users/Karthik-Bhaskar/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
[ "looks like this dataset includes changes about many other files than the ones for multi_re_qa\r\n\r\nCan you create another branch and another PR please ?", "> looks like this dataset includes changes about many other files than the ones for multi_re_qa\r\n> \r\n> Can you create another branch and another PR please ?\r\n\r\nSure I will do that. Thank you." ]
created_at: 1,607,471,074,000
updated_at: 1,607,532,397,000
closed_at: 1,607,532,397,000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1349", "html_url": "https://github.com/huggingface/datasets/pull/1349", "diff_url": "https://github.com/huggingface/datasets/pull/1349.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1349.patch", "merged_at": null }
body: Added MultiReQA, which is a dataset containing the sentence boundary annotation from eight publicly available QA datasets including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, and TextbookQA.
reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1349/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1349/timeline
performed_via_github_app: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1348
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1348/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1348/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1348/events
html_url: https://github.com/huggingface/datasets/pull/1348
id: 759,869,849
node_id: MDExOlB1bGxSZXF1ZXN0NTM0Nzk3Nzcy
number: 1,348
title: add Yoruba NER dataset
user:
{ "login": "dadelani", "id": 23586676, "node_id": "MDQ6VXNlcjIzNTg2Njc2", "avatar_url": "https://avatars.githubusercontent.com/u/23586676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dadelani", "html_url": "https://github.com/dadelani", "followers_url": "https://api.github.com/users/dadelani/followers", "following_url": "https://api.github.com/users/dadelani/following{/other_user}", "gists_url": "https://api.github.com/users/dadelani/gists{/gist_id}", "starred_url": "https://api.github.com/users/dadelani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dadelani/subscriptions", "organizations_url": "https://api.github.com/users/dadelani/orgs", "repos_url": "https://api.github.com/users/dadelani/repos", "events_url": "https://api.github.com/users/dadelani/events{/privacy}", "received_events_url": "https://api.github.com/users/dadelani/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
[ "Thank you. Okay, other pull requests only have one dataset", "The `RemoteDatasetTest` error in the CI is just a connection error, we can ignore it", "merging since the CI is fixed on master", "Thank you very much" ]
created_at: 1,607,470,955,000
updated_at: 1,607,610,625,000
closed_at: 1,607,609,383,000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1348", "html_url": "https://github.com/huggingface/datasets/pull/1348", "diff_url": "https://github.com/huggingface/datasets/pull/1348.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1348.patch", "merged_at": 1607609383000 }
body: Added Yoruba GV dataset based on this paper
reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1348/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1348/timeline
performed_via_github_app: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1347
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1347/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1347/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1347/events
html_url: https://github.com/huggingface/datasets/pull/1347
id: 759,845,231
node_id: MDExOlB1bGxSZXF1ZXN0NTM0Nzc3NjQ0
number: 1,347
title: Add spanish billion words corpus
user:
{ "login": "mariagrandury", "id": 57645283, "node_id": "MDQ6VXNlcjU3NjQ1Mjgz", "avatar_url": "https://avatars.githubusercontent.com/u/57645283?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariagrandury", "html_url": "https://github.com/mariagrandury", "followers_url": "https://api.github.com/users/mariagrandury/followers", "following_url": "https://api.github.com/users/mariagrandury/following{/other_user}", "gists_url": "https://api.github.com/users/mariagrandury/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariagrandury/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariagrandury/subscriptions", "organizations_url": "https://api.github.com/users/mariagrandury/orgs", "repos_url": "https://api.github.com/users/mariagrandury/repos", "events_url": "https://api.github.com/users/mariagrandury/events{/privacy}", "received_events_url": "https://api.github.com/users/mariagrandury/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
[ "Thank you for your feedback! I've reduced the dummy data size to 2KB.\r\n\r\nI had to rebase to fix `RemoteDatasetTest` fails, sorry about the 80 commits. \r\nI could create a new clean PR if you prefer.", "I have seen that in similar cases you have suggested to other contributors to create another branch and another PR, so I will do that.", "Yes thank you !", "The new PR is #1476 :)" ]
created_at: 1,607,467,898,000
updated_at: 1,607,685,999,000
closed_at: 1,607,685,328,000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1347", "html_url": "https://github.com/huggingface/datasets/pull/1347", "diff_url": "https://github.com/huggingface/datasets/pull/1347.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1347.patch", "merged_at": null }
body: Add an unannotated Spanish corpus of nearly 1.5 billion words, compiled from different resources from the web.
reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1347/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1347/timeline
performed_via_github_app: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1346
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1346/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1346/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1346/events
html_url: https://github.com/huggingface/datasets/pull/1346
id: 759,844,137
node_id: MDExOlB1bGxSZXF1ZXN0NTM0Nzc2ODE5
number: 1,346
title: Add MultiBooked dataset
user:
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
[ "There' still an issue with the dummy data, let me take a look" ]
created_at: 1,607,467,776,000
updated_at: 1,608,051,729,000
closed_at: 1,608,051,729,000
author_association: MEMBER
active_lock_reason: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1346", "html_url": "https://github.com/huggingface/datasets/pull/1346", "diff_url": "https://github.com/huggingface/datasets/pull/1346.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1346.patch", "merged_at": 1608051728000 }
body: Add dataset.
reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1346/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1346/timeline
performed_via_github_app: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1345
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1345/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1345/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1345/events
html_url: https://github.com/huggingface/datasets/pull/1345
id: 759,835,486
node_id: MDExOlB1bGxSZXF1ZXN0NTM0NzY5NzMw
number: 1,345
title: First commit of NarrativeQA Dataset
user:
{ "login": "rsanjaykamath", "id": 18527321, "node_id": "MDQ6VXNlcjE4NTI3MzIx", "avatar_url": "https://avatars.githubusercontent.com/u/18527321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rsanjaykamath", "html_url": "https://github.com/rsanjaykamath", "followers_url": "https://api.github.com/users/rsanjaykamath/followers", "following_url": "https://api.github.com/users/rsanjaykamath/following{/other_user}", "gists_url": "https://api.github.com/users/rsanjaykamath/gists{/gist_id}", "starred_url": "https://api.github.com/users/rsanjaykamath/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rsanjaykamath/subscriptions", "organizations_url": "https://api.github.com/users/rsanjaykamath/orgs", "repos_url": "https://api.github.com/users/rsanjaykamath/repos", "events_url": "https://api.github.com/users/rsanjaykamath/events{/privacy}", "received_events_url": "https://api.github.com/users/rsanjaykamath/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,607,466,719,000
updated_at: 1,611,588,712,000
closed_at: 1,607,506,192,000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1345", "html_url": "https://github.com/huggingface/datasets/pull/1345", "diff_url": "https://github.com/huggingface/datasets/pull/1345.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1345.patch", "merged_at": null }
body: Added NarrativeQA dataset and included a manual downloading option to download scripts from the original scripts provided by the authors.
reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1345/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1345/timeline
performed_via_github_app: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1344
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1344/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1344/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1344/events
html_url: https://github.com/huggingface/datasets/pull/1344
id: 759,831,925
node_id: MDExOlB1bGxSZXF1ZXN0NTM0NzY2ODIy
number: 1,344
title: Add hausa ner corpus
user:
{ "login": "dadelani", "id": 23586676, "node_id": "MDQ6VXNlcjIzNTg2Njc2", "avatar_url": "https://avatars.githubusercontent.com/u/23586676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dadelani", "html_url": "https://github.com/dadelani", "followers_url": "https://api.github.com/users/dadelani/followers", "following_url": "https://api.github.com/users/dadelani/following{/other_user}", "gists_url": "https://api.github.com/users/dadelani/gists{/gist_id}", "starred_url": "https://api.github.com/users/dadelani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dadelani/subscriptions", "organizations_url": "https://api.github.com/users/dadelani/orgs", "repos_url": "https://api.github.com/users/dadelani/repos", "events_url": "https://api.github.com/users/dadelani/events{/privacy}", "received_events_url": "https://api.github.com/users/dadelani/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,607,466,304,000
updated_at: 1,607,469,115,000
closed_at: 1,607,469,115,000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1344", "html_url": "https://github.com/huggingface/datasets/pull/1344", "diff_url": "https://github.com/huggingface/datasets/pull/1344.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1344.patch", "merged_at": null }
body: Added Hausa VOA NER data
reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1344/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1344/timeline
performed_via_github_app: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1343
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1343/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1343/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1343/events
html_url: https://github.com/huggingface/datasets/pull/1343
id: 759,809,999
node_id: MDExOlB1bGxSZXF1ZXN0NTM0NzQ4NTE4
number: 1,343
title: Add LiveQA
user:
{ "login": "j-chim", "id": 22435209, "node_id": "MDQ6VXNlcjIyNDM1MjA5", "avatar_url": "https://avatars.githubusercontent.com/u/22435209?v=4", "gravatar_id": "", "url": "https://api.github.com/users/j-chim", "html_url": "https://github.com/j-chim", "followers_url": "https://api.github.com/users/j-chim/followers", "following_url": "https://api.github.com/users/j-chim/following{/other_user}", "gists_url": "https://api.github.com/users/j-chim/gists{/gist_id}", "starred_url": "https://api.github.com/users/j-chim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/j-chim/subscriptions", "organizations_url": "https://api.github.com/users/j-chim/orgs", "repos_url": "https://api.github.com/users/j-chim/repos", "events_url": "https://api.github.com/users/j-chim/events{/privacy}", "received_events_url": "https://api.github.com/users/j-chim/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,607,464,356,000
updated_at: 1,607,938,828,000
closed_at: 1,607,938,828,000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1343", "html_url": "https://github.com/huggingface/datasets/pull/1343", "diff_url": "https://github.com/huggingface/datasets/pull/1343.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1343.patch", "merged_at": 1607938828000 }
body: This PR adds LiveQA, the Chinese real-time/timeline-based QA task by [Liu et al., 2020](https://arxiv.org/pdf/2010.00526.pdf).
reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1343/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1343/timeline
performed_via_github_app: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1342
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1342/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1342/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1342/events
html_url: https://github.com/huggingface/datasets/pull/1342
id: 759,794,121
node_id: MDExOlB1bGxSZXF1ZXN0NTM0NzM1MzAw
number: 1,342
title: [yaml] Fix metadata according to pre-specified scheme
user:
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,607,462,794,000
updated_at: 1,607,528,247,000
closed_at: 1,607,528,246,000
author_association: MEMBER
active_lock_reason: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1342", "html_url": "https://github.com/huggingface/datasets/pull/1342", "diff_url": "https://github.com/huggingface/datasets/pull/1342.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1342.patch", "merged_at": 1607528246000 }
body: @lhoestq @yjernite
reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1342/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1342/timeline
performed_via_github_app: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1341
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1341/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1341/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1341/events
html_url: https://github.com/huggingface/datasets/pull/1341
id: 759,784,557
node_id: MDExOlB1bGxSZXF1ZXN0NTM0NzI3MzU5
number: 1,341
title: added references to only data card creator to all guides
user:
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,607,461,871,000
updated_at: 1,607,463,372,000
closed_at: 1,607,463,371,000
author_association: MEMBER
active_lock_reason: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1341", "html_url": "https://github.com/huggingface/datasets/pull/1341", "diff_url": "https://github.com/huggingface/datasets/pull/1341.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1341.patch", "merged_at": 1607463371000 }
body: We can now use the wonderful online form for dataset cards created by @evrardts
reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1341/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1341/timeline
performed_via_github_app: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1340
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1340/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1340/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1340/events
html_url: https://github.com/huggingface/datasets/pull/1340
id: 759,765,408
node_id: MDExOlB1bGxSZXF1ZXN0NTM0NzExMjc5
number: 1,340
title: :fist: ¡Viva la Independencia!
user:
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
[ "I've added the changes / fixes - ready for a second pass :)" ]
created_at: 1,607,460,223,000
updated_at: 1,607,942,161,000
closed_at: 1,607,942,161,000
author_association: MEMBER
active_lock_reason: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1340", "html_url": "https://github.com/huggingface/datasets/pull/1340", "diff_url": "https://github.com/huggingface/datasets/pull/1340.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1340.patch", "merged_at": 1607942161000 }
body: Adds the Catalonia Independence Corpus for stance-detection of Tweets. Ready for review!
reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1340/reactions", "total_count": 8, "+1": 0, "-1": 0, "laugh": 4, "hooray": 3, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1340/timeline
performed_via_github_app: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1339
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1339/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1339/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1339/events
html_url: https://github.com/huggingface/datasets/pull/1339
id: 759,744,088
node_id: MDExOlB1bGxSZXF1ZXN0NTM0Njk0NDI4
number: 1,339
title: hate_speech_18 initial commit
user:
{ "login": "czabo", "id": 75574105, "node_id": "MDQ6VXNlcjc1NTc0MTA1", "avatar_url": "https://avatars.githubusercontent.com/u/75574105?v=4", "gravatar_id": "", "url": "https://api.github.com/users/czabo", "html_url": "https://github.com/czabo", "followers_url": "https://api.github.com/users/czabo/followers", "following_url": "https://api.github.com/users/czabo/following{/other_user}", "gists_url": "https://api.github.com/users/czabo/gists{/gist_id}", "starred_url": "https://api.github.com/users/czabo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/czabo/subscriptions", "organizations_url": "https://api.github.com/users/czabo/orgs", "repos_url": "https://api.github.com/users/czabo/repos", "events_url": "https://api.github.com/users/czabo/events{/privacy}", "received_events_url": "https://api.github.com/users/czabo/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
[ "> Nice thanks !\r\n> \r\n> Can you rename the dataset folder and the dataset script name `hate_speech18` instead of `hate_speech_18` to follow the snake case convention we're using ?\r\n> \r\n> Also it looks like the dummy_data.zip file is quite big (almost 4MB).\r\n> Can you try to reduce its size ?\r\n> \r\n> To do so feel free to take a look inside it and remove all the unnecessary files or chunks of texts. The idea is to only keep a few examples\r\n\r\nDone, thanks! ", "Re-opened in https://github.com/huggingface/datasets/pull/1486" ]
1,607,458,208,000
1,607,789,852,000
1,607,789,852,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1339", "html_url": "https://github.com/huggingface/datasets/pull/1339", "diff_url": "https://github.com/huggingface/datasets/pull/1339.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1339.patch", "merged_at": null }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1339/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1339/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1338
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1338/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1338/comments
https://api.github.com/repos/huggingface/datasets/issues/1338/events
https://github.com/huggingface/datasets/pull/1338
759,725,770
MDExOlB1bGxSZXF1ZXN0NTM0Njc5ODcz
1,338
Add GigaFren Dataset
{ "login": "abhishekkrthakur", "id": 1183441, "node_id": "MDQ6VXNlcjExODM0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhishekkrthakur", "html_url": "https://github.com/abhishekkrthakur", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq fixed" ]
1,607,456,524,000
1,607,940,227,000
1,607,940,226,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1338", "html_url": "https://github.com/huggingface/datasets/pull/1338", "diff_url": "https://github.com/huggingface/datasets/pull/1338.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1338.patch", "merged_at": 1607940226000 }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1338/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1338/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1337
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1337/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1337/comments
https://api.github.com/repos/huggingface/datasets/issues/1337/events
https://github.com/huggingface/datasets/pull/1337
759,710,482
MDExOlB1bGxSZXF1ZXN0NTM0NjY3NDUz
1,337
Add spanish billion words
{ "login": "mariagrandury", "id": 57645283, "node_id": "MDQ6VXNlcjU3NjQ1Mjgz", "avatar_url": "https://avatars.githubusercontent.com/u/57645283?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariagrandury", "html_url": "https://github.com/mariagrandury", "followers_url": "https://api.github.com/users/mariagrandury/followers", "following_url": "https://api.github.com/users/mariagrandury/following{/other_user}", "gists_url": "https://api.github.com/users/mariagrandury/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariagrandury/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariagrandury/subscriptions", "organizations_url": "https://api.github.com/users/mariagrandury/orgs", "repos_url": "https://api.github.com/users/mariagrandury/repos", "events_url": "https://api.github.com/users/mariagrandury/events{/privacy}", "received_events_url": "https://api.github.com/users/mariagrandury/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The tests failed because of ```RemoteDatasetTest``` so I tried ```git rebase``` and messed everything up. I've made a new clean PR (#1347)." ]
1,607,455,082,000
1,607,468,378,000
1,607,462,127,000
CONTRIBUTOR
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1337", "html_url": "https://github.com/huggingface/datasets/pull/1337", "diff_url": "https://github.com/huggingface/datasets/pull/1337.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1337.patch", "merged_at": null }
Adds an unannotated corpus of the Spanish language of nearly 1.5 billion words, compiled from different resources on the web. The dataset needs about 10 GB (download: 1.89 GiB, generated: 8.34 GiB, post-processed: unknown size, total: 10.22 GiB). The tests using dummy data pass, but my laptop isn't able to run them on the real data (I left it running for over 8 hours and it didn't finish).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1337/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1337/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1336
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1336/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1336/comments
https://api.github.com/repos/huggingface/datasets/issues/1336/events
https://github.com/huggingface/datasets/pull/1336
759,706,932
MDExOlB1bGxSZXF1ZXN0NTM0NjY0NjIw
1,336
Add dataset Yoruba BBC Topic Classification
{ "login": "michael-aloys", "id": 1858628, "node_id": "MDQ6VXNlcjE4NTg2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/1858628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/michael-aloys", "html_url": "https://github.com/michael-aloys", "followers_url": "https://api.github.com/users/michael-aloys/followers", "following_url": "https://api.github.com/users/michael-aloys/following{/other_user}", "gists_url": "https://api.github.com/users/michael-aloys/gists{/gist_id}", "starred_url": "https://api.github.com/users/michael-aloys/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/michael-aloys/subscriptions", "organizations_url": "https://api.github.com/users/michael-aloys/orgs", "repos_url": "https://api.github.com/users/michael-aloys/repos", "events_url": "https://api.github.com/users/michael-aloys/events{/privacy}", "received_events_url": "https://api.github.com/users/michael-aloys/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,454,738,000
1,607,599,661,000
1,607,599,661,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1336", "html_url": "https://github.com/huggingface/datasets/pull/1336", "diff_url": "https://github.com/huggingface/datasets/pull/1336.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1336.patch", "merged_at": 1607599661000 }
Added the new dataset Yoruba BBC Topic Classification. Contains the loading script as well as the dataset card, including YAML tags.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1336/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1336/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1335
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1335/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1335/comments
https://api.github.com/repos/huggingface/datasets/issues/1335/events
https://github.com/huggingface/datasets/pull/1335
759,705,835
MDExOlB1bGxSZXF1ZXN0NTM0NjYzNzQ2
1,335
Added Bianet dataset
{ "login": "param087", "id": 26374564, "node_id": "MDQ6VXNlcjI2Mzc0NTY0", "avatar_url": "https://avatars.githubusercontent.com/u/26374564?v=4", "gravatar_id": "", "url": "https://api.github.com/users/param087", "html_url": "https://github.com/param087", "followers_url": "https://api.github.com/users/param087/followers", "following_url": "https://api.github.com/users/param087/following{/other_user}", "gists_url": "https://api.github.com/users/param087/gists{/gist_id}", "starred_url": "https://api.github.com/users/param087/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/param087/subscriptions", "organizations_url": "https://api.github.com/users/param087/orgs", "repos_url": "https://api.github.com/users/param087/repos", "events_url": "https://api.github.com/users/param087/events{/privacy}", "received_events_url": "https://api.github.com/users/param087/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "merging since the Ci is fixed on master" ]
1,607,454,632,000
1,607,940,056,000
1,607,940,056,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1335", "html_url": "https://github.com/huggingface/datasets/pull/1335", "diff_url": "https://github.com/huggingface/datasets/pull/1335.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1335.patch", "merged_at": 1607940055000 }
Hi :hugs:, this is a PR for the [Bianet: A parallel news corpus in Turkish, Kurdish and English](http://opus.nlpl.eu/Bianet.php) dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1335/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1335/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1334
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1334/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1334/comments
https://api.github.com/repos/huggingface/datasets/issues/1334/events
https://github.com/huggingface/datasets/pull/1334
759,699,993
MDExOlB1bGxSZXF1ZXN0NTM0NjU5MDg2
1,334
Add QED Amara Dataset
{ "login": "abhishekkrthakur", "id": 1183441, "node_id": "MDQ6VXNlcjExODM0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhishekkrthakur", "html_url": "https://github.com/abhishekkrthakur", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,454,073,000
1,607,599,045,000
1,607,598,957,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1334", "html_url": "https://github.com/huggingface/datasets/pull/1334", "diff_url": "https://github.com/huggingface/datasets/pull/1334.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1334.patch", "merged_at": 1607598957000 }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1334/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1334/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1333
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1333/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1333/comments
https://api.github.com/repos/huggingface/datasets/issues/1333/events
https://github.com/huggingface/datasets/pull/1333
759,687,836
MDExOlB1bGxSZXF1ZXN0NTM0NjQ4OTI4
1,333
Add Tanzil Dataset
{ "login": "abhishekkrthakur", "id": 1183441, "node_id": "MDQ6VXNlcjExODM0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhishekkrthakur", "html_url": "https://github.com/abhishekkrthakur", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,453,115,000
1,607,599,076,000
1,607,598,883,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1333", "html_url": "https://github.com/huggingface/datasets/pull/1333", "diff_url": "https://github.com/huggingface/datasets/pull/1333.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1333.patch", "merged_at": 1607598883000 }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1333/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1333/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1332
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1332/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1332/comments
https://api.github.com/repos/huggingface/datasets/issues/1332/events
https://github.com/huggingface/datasets/pull/1332
759,679,135
MDExOlB1bGxSZXF1ZXN0NTM0NjQxOTE5
1,332
Add Open Subtitles Dataset
{ "login": "abhishekkrthakur", "id": 1183441, "node_id": "MDQ6VXNlcjExODM0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhishekkrthakur", "html_url": "https://github.com/abhishekkrthakur", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,452,305,000
1,607,599,058,000
1,607,598,798,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1332", "html_url": "https://github.com/huggingface/datasets/pull/1332", "diff_url": "https://github.com/huggingface/datasets/pull/1332.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1332.patch", "merged_at": 1607598798000 }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1332/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1332/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1331
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1331/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1331/comments
https://api.github.com/repos/huggingface/datasets/issues/1331/events
https://github.com/huggingface/datasets/pull/1331
759,677,189
MDExOlB1bGxSZXF1ZXN0NTM0NjQwMzc5
1,331
First version of the new dataset hausa_voa_topics
{ "login": "michael-aloys", "id": 1858628, "node_id": "MDQ6VXNlcjE4NTg2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/1858628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/michael-aloys", "html_url": "https://github.com/michael-aloys", "followers_url": "https://api.github.com/users/michael-aloys/followers", "following_url": "https://api.github.com/users/michael-aloys/following{/other_user}", "gists_url": "https://api.github.com/users/michael-aloys/gists{/gist_id}", "starred_url": "https://api.github.com/users/michael-aloys/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/michael-aloys/subscriptions", "organizations_url": "https://api.github.com/users/michael-aloys/orgs", "repos_url": "https://api.github.com/users/michael-aloys/repos", "events_url": "https://api.github.com/users/michael-aloys/events{/privacy}", "received_events_url": "https://api.github.com/users/michael-aloys/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,452,132,000
1,607,598,593,000
1,607,598,593,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1331", "html_url": "https://github.com/huggingface/datasets/pull/1331", "diff_url": "https://github.com/huggingface/datasets/pull/1331.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1331.patch", "merged_at": 1607598593000 }
Contains the loading script as well as the dataset card, including YAML tags.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1331/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1331/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1330
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1330/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1330/comments
https://api.github.com/repos/huggingface/datasets/issues/1330/events
https://github.com/huggingface/datasets/pull/1330
759,657,324
MDExOlB1bGxSZXF1ZXN0NTM0NjI0MzMx
1,330
added un_ga dataset
{ "login": "param087", "id": 26374564, "node_id": "MDQ6VXNlcjI2Mzc0NTY0", "avatar_url": "https://avatars.githubusercontent.com/u/26374564?v=4", "gravatar_id": "", "url": "https://api.github.com/users/param087", "html_url": "https://github.com/param087", "followers_url": "https://api.github.com/users/param087/followers", "following_url": "https://api.github.com/users/param087/following{/other_user}", "gists_url": "https://api.github.com/users/param087/gists{/gist_id}", "starred_url": "https://api.github.com/users/param087/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/param087/subscriptions", "organizations_url": "https://api.github.com/users/param087/orgs", "repos_url": "https://api.github.com/users/param087/repos", "events_url": "https://api.github.com/users/param087/events{/privacy}", "received_events_url": "https://api.github.com/users/param087/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Looks like this PR includes changes about many other files than the ones for un_ga\r\n\r\nCan you create another branch an another PR please ?", "@lhoestq, Thank you for suggestions. I have made the changes and raised the new PR https://github.com/huggingface/datasets/pull/1569. " ]
1,607,450,318,000
1,607,968,354,000
1,607,968,354,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1330", "html_url": "https://github.com/huggingface/datasets/pull/1330", "diff_url": "https://github.com/huggingface/datasets/pull/1330.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1330.patch", "merged_at": null }
Hi :hugs:, this is a PR for the [United Nations General Assembly resolutions: A six-language parallel corpus](http://opus.nlpl.eu/UN.php) dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1330/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1330/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1329
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1329/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1329/comments
https://api.github.com/repos/huggingface/datasets/issues/1329/events
https://github.com/huggingface/datasets/pull/1329
759,654,174
MDExOlB1bGxSZXF1ZXN0NTM0NjIxNzg0
1,329
Add yoruba ner corpus
{ "login": "dadelani", "id": 23586676, "node_id": "MDQ6VXNlcjIzNTg2Njc2", "avatar_url": "https://avatars.githubusercontent.com/u/23586676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dadelani", "html_url": "https://github.com/dadelani", "followers_url": "https://api.github.com/users/dadelani/followers", "following_url": "https://api.github.com/users/dadelani/following{/other_user}", "gists_url": "https://api.github.com/users/dadelani/gists{/gist_id}", "starred_url": "https://api.github.com/users/dadelani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dadelani/subscriptions", "organizations_url": "https://api.github.com/users/dadelani/orgs", "repos_url": "https://api.github.com/users/dadelani/repos", "events_url": "https://api.github.com/users/dadelani/events{/privacy}", "received_events_url": "https://api.github.com/users/dadelani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,450,040,000
1,607,469,072,000
1,607,469,072,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1329", "html_url": "https://github.com/huggingface/datasets/pull/1329", "diff_url": "https://github.com/huggingface/datasets/pull/1329.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1329.patch", "merged_at": null }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1329/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1329/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1328
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1328/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1328/comments
https://api.github.com/repos/huggingface/datasets/issues/1328/events
https://github.com/huggingface/datasets/pull/1328
759,634,907
MDExOlB1bGxSZXF1ZXN0NTM0NjA2MDM1
1,328
Added the NewsPH Raw dataset and corresponding dataset card
{ "login": "jcblaisecruz02", "id": 24757547, "node_id": "MDQ6VXNlcjI0NzU3NTQ3", "avatar_url": "https://avatars.githubusercontent.com/u/24757547?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jcblaisecruz02", "html_url": "https://github.com/jcblaisecruz02", "followers_url": "https://api.github.com/users/jcblaisecruz02/followers", "following_url": "https://api.github.com/users/jcblaisecruz02/following{/other_user}", "gists_url": "https://api.github.com/users/jcblaisecruz02/gists{/gist_id}", "starred_url": "https://api.github.com/users/jcblaisecruz02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jcblaisecruz02/subscriptions", "organizations_url": "https://api.github.com/users/jcblaisecruz02/orgs", "repos_url": "https://api.github.com/users/jcblaisecruz02/repos", "events_url": "https://api.github.com/users/jcblaisecruz02/events{/privacy}", "received_events_url": "https://api.github.com/users/jcblaisecruz02/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,448,345,000
1,607,598,274,000
1,607,598,274,000
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1328", "html_url": "https://github.com/huggingface/datasets/pull/1328", "diff_url": "https://github.com/huggingface/datasets/pull/1328.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1328.patch", "merged_at": 1607598274000 }
This PR adds the original NewsPH dataset, which is used to auto-generate the NewsPH-NLI dataset. Opened as a new PR since the previous one had problems. Paper: https://arxiv.org/abs/2010.11574 Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1328/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1328/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1327
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1327/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1327/comments
https://api.github.com/repos/huggingface/datasets/issues/1327/events
https://github.com/huggingface/datasets/pull/1327
759,629,321
MDExOlB1bGxSZXF1ZXN0NTM0NjAxNDM3
1,327
Add msr_genomics_kbcomp dataset
{ "login": "manandey", "id": 6687858, "node_id": "MDQ6VXNlcjY2ODc4NTg=", "avatar_url": "https://avatars.githubusercontent.com/u/6687858?v=4", "gravatar_id": "", "url": "https://api.github.com/users/manandey", "html_url": "https://github.com/manandey", "followers_url": "https://api.github.com/users/manandey/followers", "following_url": "https://api.github.com/users/manandey/following{/other_user}", "gists_url": "https://api.github.com/users/manandey/gists{/gist_id}", "starred_url": "https://api.github.com/users/manandey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manandey/subscriptions", "organizations_url": "https://api.github.com/users/manandey/orgs", "repos_url": "https://api.github.com/users/manandey/repos", "events_url": "https://api.github.com/users/manandey/events{/privacy}", "received_events_url": "https://api.github.com/users/manandey/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,447,900,000
1,607,451,512,000
1,607,451,486,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1327", "html_url": "https://github.com/huggingface/datasets/pull/1327", "diff_url": "https://github.com/huggingface/datasets/pull/1327.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1327.patch", "merged_at": 1607451486000 }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1327/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1327/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1326
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1326/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1326/comments
https://api.github.com/repos/huggingface/datasets/issues/1326/events
https://github.com/huggingface/datasets/pull/1326
759,611,784
MDExOlB1bGxSZXF1ZXN0NTM0NTg2ODY4
1,326
TEP: Tehran English-Persian parallel corpus
{ "login": "spatil6", "id": 6419011, "node_id": "MDQ6VXNlcjY0MTkwMTE=", "avatar_url": "https://avatars.githubusercontent.com/u/6419011?v=4", "gravatar_id": "", "url": "https://api.github.com/users/spatil6", "html_url": "https://github.com/spatil6", "followers_url": "https://api.github.com/users/spatil6/followers", "following_url": "https://api.github.com/users/spatil6/following{/other_user}", "gists_url": "https://api.github.com/users/spatil6/gists{/gist_id}", "starred_url": "https://api.github.com/users/spatil6/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/spatil6/subscriptions", "organizations_url": "https://api.github.com/users/spatil6/orgs", "repos_url": "https://api.github.com/users/spatil6/repos", "events_url": "https://api.github.com/users/spatil6/events{/privacy}", "received_events_url": "https://api.github.com/users/spatil6/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,446,613,000
1,608,389,703,000
1,607,599,517,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1326", "html_url": "https://github.com/huggingface/datasets/pull/1326", "diff_url": "https://github.com/huggingface/datasets/pull/1326.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1326.patch", "merged_at": 1607599517000 }
TEP: Tehran English-Persian parallel corpus. More info: http://opus.nlpl.eu/TEP.php
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1326/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1326/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1325
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1325/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1325/comments
https://api.github.com/repos/huggingface/datasets/issues/1325/events
https://github.com/huggingface/datasets/pull/1325
759,595,556
MDExOlB1bGxSZXF1ZXN0NTM0NTczNjM2
1,325
Add humicroedit dataset
{ "login": "saradhix", "id": 1351362, "node_id": "MDQ6VXNlcjEzNTEzNjI=", "avatar_url": "https://avatars.githubusercontent.com/u/1351362?v=4", "gravatar_id": "", "url": "https://api.github.com/users/saradhix", "html_url": "https://github.com/saradhix", "followers_url": "https://api.github.com/users/saradhix/followers", "following_url": "https://api.github.com/users/saradhix/following{/other_user}", "gists_url": "https://api.github.com/users/saradhix/gists{/gist_id}", "starred_url": "https://api.github.com/users/saradhix/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/saradhix/subscriptions", "organizations_url": "https://api.github.com/users/saradhix/orgs", "repos_url": "https://api.github.com/users/saradhix/repos", "events_url": "https://api.github.com/users/saradhix/events{/privacy}", "received_events_url": "https://api.github.com/users/saradhix/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Updated the commit with the generated yaml tags", "merging since the CI is fixed on master" ]
1,607,445,346,000
1,608,227,949,000
1,608,227,949,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1325", "html_url": "https://github.com/huggingface/datasets/pull/1325", "diff_url": "https://github.com/huggingface/datasets/pull/1325.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1325.patch", "merged_at": 1608227949000 }
Pull request for adding the humicroedit dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1325/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1325/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1324
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1324/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1324/comments
https://api.github.com/repos/huggingface/datasets/issues/1324/events
https://github.com/huggingface/datasets/issues/1324
759,587,864
MDU6SXNzdWU3NTk1ODc4NjQ=
1,324
❓ Sharing ElasticSearch indexed dataset
{ "login": "pietrolesci", "id": 61748653, "node_id": "MDQ6VXNlcjYxNzQ4NjUz", "avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pietrolesci", "html_url": "https://github.com/pietrolesci", "followers_url": "https://api.github.com/users/pietrolesci/followers", "following_url": "https://api.github.com/users/pietrolesci/following{/other_user}", "gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}", "starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions", "organizations_url": "https://api.github.com/users/pietrolesci/orgs", "repos_url": "https://api.github.com/users/pietrolesci/repos", "events_url": "https://api.github.com/users/pietrolesci/events{/privacy}", "received_events_url": "https://api.github.com/users/pietrolesci/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[ "Hello @pietrolesci , I am not sure to understand what you are trying to do here.\r\n\r\nIf you're looking for ways to save a dataset on disk, you can you the `save_to_disk` method:\r\n```python\r\n>>> import datasets\r\n>>> loaded_dataset = datasets.load(\"dataset_name\")\r\n>>> loaded_dataset.save_to_disk(\"/path/on/your/disk\")\r\n```\r\n\r\nThe saved dataset can later be retrieved using:\r\n```python\r\n>>> loaded_dataset = datasets.Dataset.load_from_disk(\"/path/on/your/disk\")\r\n```\r\n\r\nAlso, I'd recommend posting your question directly in the issue section of the [elasticsearch repo](https://github.com/elastic/elasticsearch)", "Hi @SBrandeis,\n\nThanks a lot for picking up my request. \n\nMaybe I can clarify my use-case with a bit of context. Say I have the IMDb dataset. I create an ES index on it. Now I can save and reload the dataset from disk normally. Once I reload the dataset, it is easy to retrieve the ES index on my machine. I was wondering: is there a way I can share the (now) indexed version of the IMDb dataset with my colleagues without requiring them to re-index it?\n\nThanks a lot in advance for your consideration.\n\nBest,\n\nPietro", "Thanks for the clarification.\r\n\r\nI am not familiar with ElasticSearch, but if I understand well you're trying to migrate your data along with the ES index.\r\nMy advice would be to check out ES documentation, for instance, this might help you: https://www.elastic.co/guide/en/cloud/current/ec-migrate-data.html\r\n\r\nLet me know if it helps" ]
1,607,444,758,000
1,608,623,456,000
null
NONE
null
null
null
Hi there, First of all, thank you very much for this amazing library. Datasets have become my preferred data structure for basically everything I am currently doing. **Question:** I'm working with a dataset and I have an Elasticsearch container running at localhost:9200. I added an Elasticsearch index and I was wondering: how can I know where it has been saved, and how can I share the indexed dataset with others? I tried to dig into the docs, but could not find anything about that. Thank you very much for your help. Best, Pietro Edit: apologies for the wrong label
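As a sketch of how the sharing question above might be approached with the library's search API: the index itself lives in the Elasticsearch cluster rather than in the dataset's on-disk Arrow files, so a colleague pointed at the same cluster can re-attach it with `load_elasticsearch_index` instead of re-indexing. The host, port, and index names below are illustrative assumptions, not values from this issue:

```python
from datasets import load_dataset

# Build the index once against a running Elasticsearch container
# (assumed to be reachable at localhost:9200).
ds = load_dataset("imdb", split="train")
ds.add_elasticsearch_index("text", host="localhost", port="9200", es_index_name="hf_imdb_text")

# Someone with access to the same cluster re-attaches the existing
# Elasticsearch index instead of re-indexing the documents.
ds2 = load_dataset("imdb", split="train")
ds2.load_elasticsearch_index("text", es_index_name="hf_imdb_text", host="localhost", port="9200")

scores, examples = ds2.get_nearest_examples("text", "a masterpiece of cinema", k=5)
print(examples["text"][0])
```

Sharing the indexed dataset across machines would then come down to sharing access to the same Elasticsearch cluster (or migrating an ES snapshot), as suggested in the comment thread.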
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1324/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1324/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1323
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1323/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1323/comments
https://api.github.com/repos/huggingface/datasets/issues/1323/events
https://github.com/huggingface/datasets/pull/1323
759,581,919
MDExOlB1bGxSZXF1ZXN0NTM0NTYyNDQ0
1,323
Add CC-News dataset of English language articles
{ "login": "vblagoje", "id": 458335, "node_id": "MDQ6VXNlcjQ1ODMzNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vblagoje", "html_url": "https://github.com/vblagoje", "followers_url": "https://api.github.com/users/vblagoje/followers", "following_url": "https://api.github.com/users/vblagoje/following{/other_user}", "gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}", "starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions", "organizations_url": "https://api.github.com/users/vblagoje/orgs", "repos_url": "https://api.github.com/users/vblagoje/repos", "events_url": "https://api.github.com/users/vblagoje/events{/privacy}", "received_events_url": "https://api.github.com/users/vblagoje/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@vblagoje nice work, please add the README.md file and it would be ready", "@lhoestq @tanmoyio @yjernite please have a look at the dataset card. Don't forget that the dataset is still hosted on my private gs bucket and should eventually be moved to the HF bucket", "I will move the files soon and ping you when it's done and with the new URLs :) ", "Hi !\r\n\r\nI just moved the file to a HF bucket. It's available at https://storage.googleapis.com/huggingface-nlp/datasets/cc_news/cc_news.tar.gz\r\n\r\nSorry for the delay ^^'", "@lhoestq no worries, updated PR with the new URL and rebased to master\r\n" ]
1,607,444,295,000
1,612,198,549,000
1,612,198,549,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1323", "html_url": "https://github.com/huggingface/datasets/pull/1323", "diff_url": "https://github.com/huggingface/datasets/pull/1323.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1323.patch", "merged_at": 1612198549000 }
Adds the [CC-News](https://commoncrawl.org/2016/10/news-dataset-available/) dataset. It contains 708,241 English-language news articles. Although each article has a language field, these tags are not reliable, so I've used the spaCy language detection [pipeline](https://spacy.io/universe/project/spacy-langdetect) to confirm that the article language is indeed English. The prepared dataset is temporarily hosted on my private Google Storage [bucket](https://storage.googleapis.com/hf_datasets/cc_news.tar.gz). We can move it to HF storage and update this PR before merging.
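For reference, a minimal sketch of the language check described above, assuming the spaCy 2.x pipeline API of `spacy-langdetect`; the model name and the score threshold are illustrative assumptions, not taken from the PR:

```python
import spacy
from spacy_langdetect import LanguageDetector

# Assumed setup: spaCy 2.x with the small English model installed.
nlp = spacy.load("en_core_web_sm")
nlp.add_pipe(LanguageDetector(), name="language_detector", last=True)

def is_english(text: str, threshold: float = 0.9) -> bool:
    """Return True if the detector labels the text as English with enough confidence."""
    doc = nlp(text)
    lang = doc._.language  # e.g. {"language": "en", "score": 0.97}
    return lang["language"] == "en" and lang["score"] >= threshold

print(is_english("This is an English news article about Common Crawl."))
```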
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1323/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1323/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1322
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1322/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1322/comments
https://api.github.com/repos/huggingface/datasets/issues/1322/events
https://github.com/huggingface/datasets/pull/1322
759,576,003
MDExOlB1bGxSZXF1ZXN0NTM0NTU3Njg3
1,322
add indonlu benchmark datasets
{ "login": "yasirabd", "id": 6518504, "node_id": "MDQ6VXNlcjY1MTg1MDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/6518504?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yasirabd", "html_url": "https://github.com/yasirabd", "followers_url": "https://api.github.com/users/yasirabd/followers", "following_url": "https://api.github.com/users/yasirabd/following{/other_user}", "gists_url": "https://api.github.com/users/yasirabd/gists{/gist_id}", "starred_url": "https://api.github.com/users/yasirabd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yasirabd/subscriptions", "organizations_url": "https://api.github.com/users/yasirabd/orgs", "repos_url": "https://api.github.com/users/yasirabd/repos", "events_url": "https://api.github.com/users/yasirabd/events{/privacy}", "received_events_url": "https://api.github.com/users/yasirabd/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,443,858,000
1,607,825,487,000
1,607,824,468,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1322", "html_url": "https://github.com/huggingface/datasets/pull/1322", "diff_url": "https://github.com/huggingface/datasets/pull/1322.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1322.patch", "merged_at": null }
The IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for the Indonesian language. There are 12 datasets in IndoNLU.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1322/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1322/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1321
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1321/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1321/comments
https://api.github.com/repos/huggingface/datasets/issues/1321/events
https://github.com/huggingface/datasets/pull/1321
759,573,610
MDExOlB1bGxSZXF1ZXN0NTM0NTU1Nzg1
1,321
added dutch_social
{ "login": "skyprince999", "id": 9033954, "node_id": "MDQ6VXNlcjkwMzM5NTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9033954?v=4", "gravatar_id": "", "url": "https://api.github.com/users/skyprince999", "html_url": "https://github.com/skyprince999", "followers_url": "https://api.github.com/users/skyprince999/followers", "following_url": "https://api.github.com/users/skyprince999/following{/other_user}", "gists_url": "https://api.github.com/users/skyprince999/gists{/gist_id}", "starred_url": "https://api.github.com/users/skyprince999/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/skyprince999/subscriptions", "organizations_url": "https://api.github.com/users/skyprince999/orgs", "repos_url": "https://api.github.com/users/skyprince999/repos", "events_url": "https://api.github.com/users/skyprince999/events{/privacy}", "received_events_url": "https://api.github.com/users/skyprince999/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq \r\nUpdated the `dummy_data.zip `(<10kb)I had to reduce it to just a few samples. \r\nTrain-Test-Dev (20-5-5 samples) \r\n\r\nBut the push also added changes from other PRs (probably because of a rebase!) So the files changed tab shows 466 files were changed! \r\n", "Thanks ! The dummy data are all good now :) \r\n\r\nLooks like this PR includes changes to many other files than the ones for dutch_social now.\r\n\r\nCan you create another branch and another PR please ?", "> \r\n> Can you create another branch and another PR please ?\r\n@lhoestq \r\n\r\nI did a rebase. Now it doesn't include the other files. Does that help? \r\n\r\n", "Yes thanks !" ]
1,607,443,674,000
1,608,113,657,000
1,608,113,657,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1321", "html_url": "https://github.com/huggingface/datasets/pull/1321", "diff_url": "https://github.com/huggingface/datasets/pull/1321.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1321.patch", "merged_at": 1608113657000 }
The Dutch social media tweets dataset, which has a total of more than 210k tweets in the Dutch language. These tweets have been machine-annotated with sentiment scores (the `label` feature), `industry`, and `hisco_codes`. It can be used for sentiment analysis, multi-label classification, and entity tagging.
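Assuming the script lands under the name `dutch_social`, loading it should follow the usual pattern; the split layout below is taken from the train/test/dev description in the comment thread, so treat this as a sketch rather than the final API:

```python
from datasets import load_dataset

# Dataset name assumed from this PR.
ds = load_dataset("dutch_social")
example = ds["train"][0]
print(example["label"])  # machine-annotated sentiment score
```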
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1321/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1320
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1320/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1320/comments
https://api.github.com/repos/huggingface/datasets/issues/1320/events
https://github.com/huggingface/datasets/pull/1320
759,566,148
MDExOlB1bGxSZXF1ZXN0NTM0NTUwMDM4
1,320
Added the WikiText-TL39 dataset and corresponding card
{ "login": "jcblaisecruz02", "id": 24757547, "node_id": "MDQ6VXNlcjI0NzU3NTQ3", "avatar_url": "https://avatars.githubusercontent.com/u/24757547?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jcblaisecruz02", "html_url": "https://github.com/jcblaisecruz02", "followers_url": "https://api.github.com/users/jcblaisecruz02/followers", "following_url": "https://api.github.com/users/jcblaisecruz02/following{/other_user}", "gists_url": "https://api.github.com/users/jcblaisecruz02/gists{/gist_id}", "starred_url": "https://api.github.com/users/jcblaisecruz02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jcblaisecruz02/subscriptions", "organizations_url": "https://api.github.com/users/jcblaisecruz02/orgs", "repos_url": "https://api.github.com/users/jcblaisecruz02/repos", "events_url": "https://api.github.com/users/jcblaisecruz02/events{/privacy}", "received_events_url": "https://api.github.com/users/jcblaisecruz02/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,443,226,000
1,607,599,493,000
1,607,599,493,000
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1320", "html_url": "https://github.com/huggingface/datasets/pull/1320", "diff_url": "https://github.com/huggingface/datasets/pull/1320.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1320.patch", "merged_at": 1607599492000 }
This PR adds the WikiText-TL-39 Filipino Language Modeling dataset. Started a new pull request since there were problems with the earlier one. Paper: https://arxiv.org/abs/1907.00409 Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1320/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1319
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1319/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1319/comments
https://api.github.com/repos/huggingface/datasets/issues/1319/events
https://github.com/huggingface/datasets/pull/1319
759,565,923
MDExOlB1bGxSZXF1ZXN0NTM0NTQ5ODU5
1,319
adding wili-2018 language identification dataset
{ "login": "Shubhambindal2017", "id": 31540058, "node_id": "MDQ6VXNlcjMxNTQwMDU4", "avatar_url": "https://avatars.githubusercontent.com/u/31540058?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Shubhambindal2017", "html_url": "https://github.com/Shubhambindal2017", "followers_url": "https://api.github.com/users/Shubhambindal2017/followers", "following_url": "https://api.github.com/users/Shubhambindal2017/following{/other_user}", "gists_url": "https://api.github.com/users/Shubhambindal2017/gists{/gist_id}", "starred_url": "https://api.github.com/users/Shubhambindal2017/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Shubhambindal2017/subscriptions", "organizations_url": "https://api.github.com/users/Shubhambindal2017/orgs", "repos_url": "https://api.github.com/users/Shubhambindal2017/repos", "events_url": "https://api.github.com/users/Shubhambindal2017/events{/privacy}", "received_events_url": "https://api.github.com/users/Shubhambindal2017/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq Not sure what happened, I just changed the py file but it is showing some TensorFlow error now.", "You can ignore it.\r\nIt's caused by the Tensorflow update that happened 30min ago. They added breaking changes.\r\nI'm working on a fix on the master branch right now\r\n", "oh okay, btw I have made the required change for reading the CSV, I think it should be fine now, please take a look at it when you have some time.", "merging since the CI is fixed on master" ]
1,607,443,209,000
1,607,980,832,000
1,607,980,832,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1319", "html_url": "https://github.com/huggingface/datasets/pull/1319", "diff_url": "https://github.com/huggingface/datasets/pull/1319.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1319.patch", "merged_at": 1607980832000 }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1319/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1319/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1318
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1318/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1318/comments
https://api.github.com/repos/huggingface/datasets/issues/1318/events
https://github.com/huggingface/datasets/pull/1318
759,565,629
MDExOlB1bGxSZXF1ZXN0NTM0NTQ5NjE3
1,318
ethos first commit
{ "login": "iamollas", "id": 22838900, "node_id": "MDQ6VXNlcjIyODM4OTAw", "avatar_url": "https://avatars.githubusercontent.com/u/22838900?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iamollas", "html_url": "https://github.com/iamollas", "followers_url": "https://api.github.com/users/iamollas/followers", "following_url": "https://api.github.com/users/iamollas/following{/other_user}", "gists_url": "https://api.github.com/users/iamollas/gists{/gist_id}", "starred_url": "https://api.github.com/users/iamollas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iamollas/subscriptions", "organizations_url": "https://api.github.com/users/iamollas/orgs", "repos_url": "https://api.github.com/users/iamollas/repos", "events_url": "https://api.github.com/users/iamollas/events{/privacy}", "received_events_url": "https://api.github.com/users/iamollas/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "> Nice thanks !\r\n> \r\n> I left a few comments\r\n> \r\n> Also it looks like this PR includes changes about other files than the ones for ethos\r\n> \r\n> Can you create another branch and another PR please ?\r\n\r\n@lhoestq Should I close this PR? The new one is the: #1453", "You can create another PR and close this one if you don't mind", "> You can create another PR and close this one if you don't mind\r\n\r\nPerfect! You should see the #1453 PR for the fixed version! Thanks" ]
1,607,443,187,000
1,607,611,557,000
1,607,611,557,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1318", "html_url": "https://github.com/huggingface/datasets/pull/1318", "diff_url": "https://github.com/huggingface/datasets/pull/1318.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1318.patch", "merged_at": null }
Ethos passed all the tests except this one: `RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<your-dataset-name>`, which fails with this error: `E OSError: Cannot find data file. E Original error: E [Errno 2] No such file or directory:`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1318/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1317
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1317/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1317/comments
https://api.github.com/repos/huggingface/datasets/issues/1317/events
https://github.com/huggingface/datasets/pull/1317
759,553,495
MDExOlB1bGxSZXF1ZXN0NTM0NTM5NTQ5
1,317
add 10k German News Article Dataset
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "You can just create another branch from master on your fork and create another PR:\r\n\r\nfirst update your master branch\r\n```\r\ngit checkout master\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit push\r\n```\r\n\r\nthen create a new branch\r\n```\r\ngit checkout -b my-new-branch-name\r\n```\r\n\r\nThen you can add, commit and push the gnad10 files and open a new PR", "closing in favor of #1572 " ]
1,607,442,265,000
1,631,897,751,000
1,608,137,443,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1317", "html_url": "https://github.com/huggingface/datasets/pull/1317", "diff_url": "https://github.com/huggingface/datasets/pull/1317.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1317.patch", "merged_at": null }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1317/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1317/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1316
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1316/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1316/comments
https://api.github.com/repos/huggingface/datasets/issues/1316/events
https://github.com/huggingface/datasets/pull/1316
759,549,601
MDExOlB1bGxSZXF1ZXN0NTM0NTM2Mzc1
1,316
Allow GitHub releases as dataset source
{ "login": "benjaminvdb", "id": 8875786, "node_id": "MDQ6VXNlcjg4NzU3ODY=", "avatar_url": "https://avatars.githubusercontent.com/u/8875786?v=4", "gravatar_id": "", "url": "https://api.github.com/users/benjaminvdb", "html_url": "https://github.com/benjaminvdb", "followers_url": "https://api.github.com/users/benjaminvdb/followers", "following_url": "https://api.github.com/users/benjaminvdb/following{/other_user}", "gists_url": "https://api.github.com/users/benjaminvdb/gists{/gist_id}", "starred_url": "https://api.github.com/users/benjaminvdb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benjaminvdb/subscriptions", "organizations_url": "https://api.github.com/users/benjaminvdb/orgs", "repos_url": "https://api.github.com/users/benjaminvdb/repos", "events_url": "https://api.github.com/users/benjaminvdb/events{/privacy}", "received_events_url": "https://api.github.com/users/benjaminvdb/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,441,975,000
1,607,595,120,000
1,607,595,120,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1316", "html_url": "https://github.com/huggingface/datasets/pull/1316", "diff_url": "https://github.com/huggingface/datasets/pull/1316.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1316.patch", "merged_at": 1607595120000 }
# Summary Providing a GitHub release URL to `DownloadManager.download()` currently throws a `ConnectionError: Couldn't reach [DOWNLOAD_URL]`. This PR fixes this problem by adding an exception for GitHub releases in `datasets.utils.file_utils.get_from_cache()`. # Reproduce ``` import datasets url = 'http://github.com/benjaminvdb/DBRD/releases/download/v3.0/DBRD_v3.tgz' result = datasets.utils.file_utils.get_from_cache(url) # Returns: ConnectionError: Couldn't reach http://github.com/benjaminvdb/DBRD/releases/download/v3.0/DBRD_v3.tgz ``` # Cause GitHub releases return an HTTP status 403 (Forbidden) after the request is redirected (to AWS S3, in this case). `get_from_cache()` checks whether the status is 200 (OK) or if it is part of two exceptions (Google Drive or Firebase), otherwise the mentioned error is thrown. # Solution Just like the exceptions for Google Drive and Firebase, add a condition for GitHub release URLs that return the HTTP status 403. If this is the case, continue normally.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1316/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1316/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1315
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1315/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1315/comments
https://api.github.com/repos/huggingface/datasets/issues/1315/events
https://github.com/huggingface/datasets/pull/1315
759,548,706
MDExOlB1bGxSZXF1ZXN0NTM0NTM1NjM4
1,315
add yelp_review_full
{ "login": "hfawaz", "id": 29229602, "node_id": "MDQ6VXNlcjI5MjI5NjAy", "avatar_url": "https://avatars.githubusercontent.com/u/29229602?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hfawaz", "html_url": "https://github.com/hfawaz", "followers_url": "https://api.github.com/users/hfawaz/followers", "following_url": "https://api.github.com/users/hfawaz/following{/other_user}", "gists_url": "https://api.github.com/users/hfawaz/gists{/gist_id}", "starred_url": "https://api.github.com/users/hfawaz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hfawaz/subscriptions", "organizations_url": "https://api.github.com/users/hfawaz/orgs", "repos_url": "https://api.github.com/users/hfawaz/repos", "events_url": "https://api.github.com/users/hfawaz/events{/privacy}", "received_events_url": "https://api.github.com/users/hfawaz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,441,907,000
1,607,529,349,000
1,607,529,349,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1315", "html_url": "https://github.com/huggingface/datasets/pull/1315", "diff_url": "https://github.com/huggingface/datasets/pull/1315.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1315.patch", "merged_at": 1607529348000 }
This corresponds to the Yelp-5 requested in https://github.com/huggingface/datasets/issues/353 I included the dataset card.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1315/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1315/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1314
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1314/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1314/comments
https://api.github.com/repos/huggingface/datasets/issues/1314/events
https://github.com/huggingface/datasets/pull/1314
759,541,937
MDExOlB1bGxSZXF1ZXN0NTM0NTMwMDE5
1,314
Add snips built in intents 2016 12
{ "login": "bduvenhage", "id": 8405335, "node_id": "MDQ6VXNlcjg0MDUzMzU=", "avatar_url": "https://avatars.githubusercontent.com/u/8405335?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bduvenhage", "html_url": "https://github.com/bduvenhage", "followers_url": "https://api.github.com/users/bduvenhage/followers", "following_url": "https://api.github.com/users/bduvenhage/following{/other_user}", "gists_url": "https://api.github.com/users/bduvenhage/gists{/gist_id}", "starred_url": "https://api.github.com/users/bduvenhage/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bduvenhage/subscriptions", "organizations_url": "https://api.github.com/users/bduvenhage/orgs", "repos_url": "https://api.github.com/users/bduvenhage/repos", "events_url": "https://api.github.com/users/bduvenhage/events{/privacy}", "received_events_url": "https://api.github.com/users/bduvenhage/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "It is not clear how to automatically add the dummy data if the source data is a more complex json format. Should I manually take a fraction of the source data and include it as dummy data?\r\n", "Added a fraction of the real data as dummy data.", "merging since the CI is fixed on master" ]
1,607,441,419,000
1,607,939,947,000
1,607,939,947,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1314", "html_url": "https://github.com/huggingface/datasets/pull/1314", "diff_url": "https://github.com/huggingface/datasets/pull/1314.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1314.patch", "merged_at": 1607939946000 }
This PR proposes to add the Snips.ai built in intents dataset. The first configuration added is for the intent labels only, but the dataset includes entity slots that may in future be added as alternate configurations.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1314/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1314/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1313
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1313/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1313/comments
https://api.github.com/repos/huggingface/datasets/issues/1313/events
https://github.com/huggingface/datasets/pull/1313
759,536,512
MDExOlB1bGxSZXF1ZXN0NTM0NTI1NjE3
1,313
Add HateSpeech Corpus for Polish
{ "login": "kacperlukawski", "id": 2649301, "node_id": "MDQ6VXNlcjI2NDkzMDE=", "avatar_url": "https://avatars.githubusercontent.com/u/2649301?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kacperlukawski", "html_url": "https://github.com/kacperlukawski", "followers_url": "https://api.github.com/users/kacperlukawski/followers", "following_url": "https://api.github.com/users/kacperlukawski/following{/other_user}", "gists_url": "https://api.github.com/users/kacperlukawski/gists{/gist_id}", "starred_url": "https://api.github.com/users/kacperlukawski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kacperlukawski/subscriptions", "organizations_url": "https://api.github.com/users/kacperlukawski/orgs", "repos_url": "https://api.github.com/users/kacperlukawski/repos", "events_url": "https://api.github.com/users/kacperlukawski/events{/privacy}", "received_events_url": "https://api.github.com/users/kacperlukawski/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq Do you think using the ClassLabel is correct if we don't know the meaning of them?", "Once we find out the meanings we can still add them to the dataset card", "Feel free to ping me when the PR is ready for the final review" ]
1,607,441,033,000
1,608,137,325,000
1,608,137,325,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1313", "html_url": "https://github.com/huggingface/datasets/pull/1313", "diff_url": "https://github.com/huggingface/datasets/pull/1313.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1313.patch", "merged_at": 1608137325000 }
This PR adds a HateSpeech Corpus for Polish, containing offensive language examples. - **Homepage:** http://zil.ipipan.waw.pl/HateSpeech - **Paper:** http://www.qualitativesociologyreview.org/PL/Volume38/PSJ_13_2_Troszynski_Wawer.pdf
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1313/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1313/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1312
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1312/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1312/comments
https://api.github.com/repos/huggingface/datasets/issues/1312/events
https://github.com/huggingface/datasets/pull/1312
759,532,626
MDExOlB1bGxSZXF1ZXN0NTM0NTIyMzc1
1,312
Jigsaw toxicity pred
{ "login": "taihim", "id": 13764071, "node_id": "MDQ6VXNlcjEzNzY0MDcx", "avatar_url": "https://avatars.githubusercontent.com/u/13764071?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taihim", "html_url": "https://github.com/taihim", "followers_url": "https://api.github.com/users/taihim/followers", "following_url": "https://api.github.com/users/taihim/following{/other_user}", "gists_url": "https://api.github.com/users/taihim/gists{/gist_id}", "starred_url": "https://api.github.com/users/taihim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/taihim/subscriptions", "organizations_url": "https://api.github.com/users/taihim/orgs", "repos_url": "https://api.github.com/users/taihim/repos", "events_url": "https://api.github.com/users/taihim/events{/privacy}", "received_events_url": "https://api.github.com/users/taihim/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,440,754,000
1,607,688,692,000
1,607,688,692,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1312", "html_url": "https://github.com/huggingface/datasets/pull/1312", "diff_url": "https://github.com/huggingface/datasets/pull/1312.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1312.patch", "merged_at": null }
Requires manually downloading data from Kaggle.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1312/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1312/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1311
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1311/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1311/comments
https://api.github.com/repos/huggingface/datasets/issues/1311/events
https://github.com/huggingface/datasets/pull/1311
759,514,819
MDExOlB1bGxSZXF1ZXN0NTM0NTA3NjM1
1,311
Add OPUS Bible Corpus (102 Languages)
{ "login": "abhishekkrthakur", "id": 1183441, "node_id": "MDQ6VXNlcjExODM0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhishekkrthakur", "html_url": "https://github.com/abhishekkrthakur", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq done" ]
1,607,439,428,000
1,607,527,857,000
1,607,527,856,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1311", "html_url": "https://github.com/huggingface/datasets/pull/1311", "diff_url": "https://github.com/huggingface/datasets/pull/1311.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1311.patch", "merged_at": 1607527856000 }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1311/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1311/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1310
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1310/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1310/comments
https://api.github.com/repos/huggingface/datasets/issues/1310/events
https://github.com/huggingface/datasets/pull/1310
759,508,921
MDExOlB1bGxSZXF1ZXN0NTM0NTAyNzE5
1,310
Add OffensEval-TR 2020 Dataset
{ "login": "yavuzKomecoglu", "id": 5150963, "node_id": "MDQ6VXNlcjUxNTA5NjM=", "avatar_url": "https://avatars.githubusercontent.com/u/5150963?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yavuzKomecoglu", "html_url": "https://github.com/yavuzKomecoglu", "followers_url": "https://api.github.com/users/yavuzKomecoglu/followers", "following_url": "https://api.github.com/users/yavuzKomecoglu/following{/other_user}", "gists_url": "https://api.github.com/users/yavuzKomecoglu/gists{/gist_id}", "starred_url": "https://api.github.com/users/yavuzKomecoglu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yavuzKomecoglu/subscriptions", "organizations_url": "https://api.github.com/users/yavuzKomecoglu/orgs", "repos_url": "https://api.github.com/users/yavuzKomecoglu/repos", "events_url": "https://api.github.com/users/yavuzKomecoglu/events{/privacy}", "received_events_url": "https://api.github.com/users/yavuzKomecoglu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq, can you please review this PR? ", "> Awesome thank you !\r\n\r\nThanks for the small fixes @lhoestq ", "@coltekin, we have added the data set that you created an article that says \"Turkish Attack Language Community in Social Media\", HuggingFace dataset update sprint for you. We added Sprint quickly for a short time. I hope you welcome it too. The dataset is accessible at https://huggingface.co/datasets/offenseval2020_tr. ", "Thank you for the heads up. I am not familiar with the terminology above (no idea what a sprint is), but I am happy that you found the data useful. Please feel free to distribute/use it as you see fit.\r\n\r\nThe OffensEval version you included in your data set has only binary labels. There is also a version [here](https://coltekin.github.io/offensive-turkish/troff-v1.0.tsv.gz) which also includes fine-grained labels similar to the OffensEval English data set - Just in case it would be of interest.\r\n\r\nIf you have questions about the data set, or need more information please let me know." ]
1,607,438,991,000
1,607,782,542,000
1,607,529,726,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1310", "html_url": "https://github.com/huggingface/datasets/pull/1310", "diff_url": "https://github.com/huggingface/datasets/pull/1310.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1310.patch", "merged_at": 1607529726000 }
This PR adds the OffensEval-TR 2020 dataset, a Turkish offensive language corpus by me and @basakbuluz. The corpus consists of randomly sampled tweets annotated in a similar way to [OffensEval](https://sites.google.com/site/offensevalsharedtask/) and [GermEval](https://projects.fzai.h-da.de/iggsa/). - **Homepage:** [offensive-turkish](https://coltekin.github.io/offensive-turkish/) - **Paper:** [A Corpus of Turkish Offensive Language on Social Media](https://coltekin.github.io/offensive-turkish/troff.pdf) - **Point of Contact:** [Çağrı Çöltekin](ccoltekin@sfs.uni-tuebingen.de)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1310/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1310/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1309
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1309/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1309/comments
https://api.github.com/repos/huggingface/datasets/issues/1309/events
https://github.com/huggingface/datasets/pull/1309
759,501,370
MDExOlB1bGxSZXF1ZXN0NTM0NDk2NTYx
1,309
Add SAMSum Corpus dataset
{ "login": "cccntu", "id": 31893406, "node_id": "MDQ6VXNlcjMxODkzNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cccntu", "html_url": "https://github.com/cccntu", "followers_url": "https://api.github.com/users/cccntu/followers", "following_url": "https://api.github.com/users/cccntu/following{/other_user}", "gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}", "starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cccntu/subscriptions", "organizations_url": "https://api.github.com/users/cccntu/orgs", "repos_url": "https://api.github.com/users/cccntu/repos", "events_url": "https://api.github.com/users/cccntu/events{/privacy}", "received_events_url": "https://api.github.com/users/cccntu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "also to fix the check_code_quality CI you have to remove the imports of the unused `csv` and `os`", "@lhoestq Thanks for the review! I have done what you asked, README is also updated. 🤗 \r\nThe CI fails because of the added dependency. I have never used circleCI before, so I am curious how will you solve that?", "I just added `py7zr` to our test dependencies", "merging since the CI is fixed on master", "Thanks! 🤗 " ]
1,607,438,456,000
1,607,949,153,000
1,607,941,255,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1309", "html_url": "https://github.com/huggingface/datasets/pull/1309", "diff_url": "https://github.com/huggingface/datasets/pull/1309.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1309.patch", "merged_at": 1607941255000 }
Did not spend much time writing the README, might update later. Copied description and some stuff from tensorflow_datasets https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/summarization/samsum.py
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1309/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1308
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1308/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1308/comments
https://api.github.com/repos/huggingface/datasets/issues/1308/events
https://github.com/huggingface/datasets/pull/1308
759,492,953
MDExOlB1bGxSZXF1ZXN0NTM0NDg5Nzcw
1,308
Add Wiki Lingua Dataset
{ "login": "katnoria", "id": 7674948, "node_id": "MDQ6VXNlcjc2NzQ5NDg=", "avatar_url": "https://avatars.githubusercontent.com/u/7674948?v=4", "gravatar_id": "", "url": "https://api.github.com/users/katnoria", "html_url": "https://github.com/katnoria", "followers_url": "https://api.github.com/users/katnoria/followers", "following_url": "https://api.github.com/users/katnoria/following{/other_user}", "gists_url": "https://api.github.com/users/katnoria/gists{/gist_id}", "starred_url": "https://api.github.com/users/katnoria/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/katnoria/subscriptions", "organizations_url": "https://api.github.com/users/katnoria/orgs", "repos_url": "https://api.github.com/users/katnoria/repos", "events_url": "https://api.github.com/users/katnoria/events{/privacy}", "received_events_url": "https://api.github.com/users/katnoria/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I am done adding the dataset. Requesting to review and advise.", "looks like this PR has changes about many other files than the ones for WIki Lingua \r\n\r\nCan you create another branch and another PR please ?", "Any reason to have english as the default config over the other languages ?", "> looks like this PR has changes about many other files than the ones for WIki Lingua\r\n> \r\n> Can you create another branch and another PR please ?\r\n\r\nOk, I will create another branch and submit a fresh PR.", "> Any reason to have english as the default config over the other languages ?\r\n\r\nThe data for all other languages has a direct reference to English article. Thus, I kept English as default.", "closing in favor of #1470 " ]
1,607,437,813,000
1,607,942,392,000
1,607,942,392,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1308", "html_url": "https://github.com/huggingface/datasets/pull/1308", "diff_url": "https://github.com/huggingface/datasets/pull/1308.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1308.patch", "merged_at": null }
Hello, This is my first PR. I have added the Wiki Lingua dataset along with a dataset card, to the best of my knowledge. There was one hiccup though. I was unable to create dummy data because the data is in pkl format. From the documentation, I see that: ```At the moment it supports data files in the following format: txt, csv, tsv, jsonl, json, xml```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1308/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1308/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1307
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1307/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1307/comments
https://api.github.com/repos/huggingface/datasets/issues/1307/events
https://github.com/huggingface/datasets/pull/1307
759,458,835
MDExOlB1bGxSZXF1ZXN0NTM0NDYxODc5
1,307
adding capes
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,435,173,000
1,607,528,409,000
1,607,527,665,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1307", "html_url": "https://github.com/huggingface/datasets/pull/1307", "diff_url": "https://github.com/huggingface/datasets/pull/1307.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1307.patch", "merged_at": 1607527665000 }
Adding Parallel corpus of theses and dissertation abstracts in Portuguese and English from CAPES https://sites.google.com/view/felipe-soares/datasets#h.p_kxOR6EhHm2a6
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1307/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1307/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1306
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1306/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1306/comments
https://api.github.com/repos/huggingface/datasets/issues/1306/events
https://github.com/huggingface/datasets/pull/1306
759,448,427
MDExOlB1bGxSZXF1ZXN0NTM0NDUzMTU1
1,306
add W&I + LOCNESS dataset (BEA-2019 workshop shared task on GEC)
{ "login": "aseifert", "id": 4944799, "node_id": "MDQ6VXNlcjQ5NDQ3OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/4944799?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aseifert", "html_url": "https://github.com/aseifert", "followers_url": "https://api.github.com/users/aseifert/followers", "following_url": "https://api.github.com/users/aseifert/following{/other_user}", "gists_url": "https://api.github.com/users/aseifert/gists{/gist_id}", "starred_url": "https://api.github.com/users/aseifert/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aseifert/subscriptions", "organizations_url": "https://api.github.com/users/aseifert/orgs", "repos_url": "https://api.github.com/users/aseifert/repos", "events_url": "https://api.github.com/users/aseifert/events{/privacy}", "received_events_url": "https://api.github.com/users/aseifert/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I created a clean PR where I also incorporated the suggested changes here: https://github.com/huggingface/datasets/pull/1449\r\n" ]
1,607,434,294,000
1,607,594,034,000
1,607,594,008,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1306", "html_url": "https://github.com/huggingface/datasets/pull/1306", "diff_url": "https://github.com/huggingface/datasets/pull/1306.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1306.patch", "merged_at": null }
- **Name:** W&I + LOCNESS dataset (from the BEA-2019 workshop shared task on GEC) - **Description:** https://www.cl.cam.ac.uk/research/nl/bea2019st/#data - **Paper:** https://www.aclweb.org/anthology/W19-4406/ - **Motivation:** This is a recent dataset (actually two in one) for grammatical error correction and is used for benchmarking in this field of NLP. ### Checkbox - [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [x] Fill the `_DESCRIPTION` and `_CITATION` variables - [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()` - [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class. - [x] Generate the metadata file `dataset_infos.json` for all configurations - [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs - [x] Both tests for the real data and the dummy data pass.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1306/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1306/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1305
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1305/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1305/comments
https://api.github.com/repos/huggingface/datasets/issues/1305/events
https://github.com/huggingface/datasets/pull/1305
759,446,665
MDExOlB1bGxSZXF1ZXN0NTM0NDUxNzEx
1,305
[README] Added Windows command to enable slow tests
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,434,144,000
1,607,435,793,000
1,607,435,792,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1305", "html_url": "https://github.com/huggingface/datasets/pull/1305", "diff_url": "https://github.com/huggingface/datasets/pull/1305.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1305.patch", "merged_at": 1607435792000 }
The Windows command to run slow tests has caused issues, so this adds a functional Windows command.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1305/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1305/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1304
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1304/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1304/comments
https://api.github.com/repos/huggingface/datasets/issues/1304/events
https://github.com/huggingface/datasets/pull/1304
759,440,841
MDExOlB1bGxSZXF1ZXN0NTM0NDQ2Nzcy
1,304
adding eitb_parcc
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,433,654,000
1,607,536,974,000
1,607,536,923,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1304", "html_url": "https://github.com/huggingface/datasets/pull/1304", "diff_url": "https://github.com/huggingface/datasets/pull/1304.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1304.patch", "merged_at": 1607536923000 }
Adding EiTB-ParCC: Parallel Corpus of Comparable News http://opus.nlpl.eu/EiTB-ParCC.php
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1304/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1303
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1303/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1303/comments
https://api.github.com/repos/huggingface/datasets/issues/1303/events
https://github.com/huggingface/datasets/pull/1303
759,440,484
MDExOlB1bGxSZXF1ZXN0NTM0NDQ2NDg0
1,303
adding opus_openoffice
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,433,621,000
1,607,593,030,000
1,607,593,030,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1303", "html_url": "https://github.com/huggingface/datasets/pull/1303", "diff_url": "https://github.com/huggingface/datasets/pull/1303.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1303.patch", "merged_at": 1607593030000 }
Adding Opus OpenOffice: http://opus.nlpl.eu/OpenOffice.php 8 languages, 28 bitexts
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1303/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1303/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1302
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1302/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1302/comments
https://api.github.com/repos/huggingface/datasets/issues/1302/events
https://github.com/huggingface/datasets/pull/1302
759,435,740
MDExOlB1bGxSZXF1ZXN0NTM0NDQyNTA0
1,302
Add Danish NER dataset
{ "login": "ophelielacroix", "id": 28562991, "node_id": "MDQ6VXNlcjI4NTYyOTkx", "avatar_url": "https://avatars.githubusercontent.com/u/28562991?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ophelielacroix", "html_url": "https://github.com/ophelielacroix", "followers_url": "https://api.github.com/users/ophelielacroix/followers", "following_url": "https://api.github.com/users/ophelielacroix/following{/other_user}", "gists_url": "https://api.github.com/users/ophelielacroix/gists{/gist_id}", "starred_url": "https://api.github.com/users/ophelielacroix/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ophelielacroix/subscriptions", "organizations_url": "https://api.github.com/users/ophelielacroix/orgs", "repos_url": "https://api.github.com/users/ophelielacroix/repos", "events_url": "https://api.github.com/users/ophelielacroix/events{/privacy}", "received_events_url": "https://api.github.com/users/ophelielacroix/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,433,234,000
1,607,592,926,000
1,607,592,926,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1302", "html_url": "https://github.com/huggingface/datasets/pull/1302", "diff_url": "https://github.com/huggingface/datasets/pull/1302.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1302.patch", "merged_at": 1607592926000 }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1302/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1302/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1301
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1301/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1301/comments
https://api.github.com/repos/huggingface/datasets/issues/1301/events
https://github.com/huggingface/datasets/pull/1301
759,419,945
MDExOlB1bGxSZXF1ZXN0NTM0NDI5MjAy
1,301
arxiv dataset added
{ "login": "tanmoyio", "id": 33005287, "node_id": "MDQ6VXNlcjMzMDA1Mjg3", "avatar_url": "https://avatars.githubusercontent.com/u/33005287?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tanmoyio", "html_url": "https://github.com/tanmoyio", "followers_url": "https://api.github.com/users/tanmoyio/followers", "following_url": "https://api.github.com/users/tanmoyio/following{/other_user}", "gists_url": "https://api.github.com/users/tanmoyio/gists{/gist_id}", "starred_url": "https://api.github.com/users/tanmoyio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tanmoyio/subscriptions", "organizations_url": "https://api.github.com/users/tanmoyio/orgs", "repos_url": "https://api.github.com/users/tanmoyio/repos", "events_url": "https://api.github.com/users/tanmoyio/events{/privacy}", "received_events_url": "https://api.github.com/users/tanmoyio/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Readme added\r\n", "@lhoestq is it looking alright ? " ]
1,607,431,851,000
1,607,537,116,000
1,607,537,116,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1301", "html_url": "https://github.com/huggingface/datasets/pull/1301", "diff_url": "https://github.com/huggingface/datasets/pull/1301.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1301.patch", "merged_at": 1607537116000 }
**adding arXiv dataset**: arXiv dataset and metadata of 1.7M+ scholarly papers across STEM. Dataset link: https://www.kaggle.com/Cornell-University/arxiv
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1301/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1300
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1300/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1300/comments
https://api.github.com/repos/huggingface/datasets/issues/1300/events
https://github.com/huggingface/datasets/pull/1300
759,418,122
MDExOlB1bGxSZXF1ZXN0NTM0NDI3Njk1
1,300
added dutch_social
{ "login": "skyprince999", "id": 9033954, "node_id": "MDQ6VXNlcjkwMzM5NTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9033954?v=4", "gravatar_id": "", "url": "https://api.github.com/users/skyprince999", "html_url": "https://github.com/skyprince999", "followers_url": "https://api.github.com/users/skyprince999/followers", "following_url": "https://api.github.com/users/skyprince999/following{/other_user}", "gists_url": "https://api.github.com/users/skyprince999/gists{/gist_id}", "starred_url": "https://api.github.com/users/skyprince999/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/skyprince999/subscriptions", "organizations_url": "https://api.github.com/users/skyprince999/orgs", "repos_url": "https://api.github.com/users/skyprince999/repos", "events_url": "https://api.github.com/users/skyprince999/events{/privacy}", "received_events_url": "https://api.github.com/users/skyprince999/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Closing this since a new pull request has been made. " ]
1,607,431,670,000
1,607,443,745,000
1,607,443,745,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1300", "html_url": "https://github.com/huggingface/datasets/pull/1300", "diff_url": "https://github.com/huggingface/datasets/pull/1300.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1300.patch", "merged_at": null }
WIP, as some tests did not pass! 👎🏼
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1300/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1300/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1299
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1299/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1299/comments
https://api.github.com/repos/huggingface/datasets/issues/1299/events
https://github.com/huggingface/datasets/issues/1299
759,414,566
MDU6SXNzdWU3NTk0MTQ1NjY=
1,299
can't load "german_legal_entity_recognition" dataset
{ "login": "nataly-obr", "id": 59837137, "node_id": "MDQ6VXNlcjU5ODM3MTM3", "avatar_url": "https://avatars.githubusercontent.com/u/59837137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nataly-obr", "html_url": "https://github.com/nataly-obr", "followers_url": "https://api.github.com/users/nataly-obr/followers", "following_url": "https://api.github.com/users/nataly-obr/following{/other_user}", "gists_url": "https://api.github.com/users/nataly-obr/gists{/gist_id}", "starred_url": "https://api.github.com/users/nataly-obr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nataly-obr/subscriptions", "organizations_url": "https://api.github.com/users/nataly-obr/orgs", "repos_url": "https://api.github.com/users/nataly-obr/repos", "events_url": "https://api.github.com/users/nataly-obr/events{/privacy}", "received_events_url": "https://api.github.com/users/nataly-obr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Please if you could tell me more about the error? \r\n\r\n1. Please check the directory you've been working on\r\n2. Check for any typos", "> Please if you could tell me more about the error?\r\n> \r\n> 1. Please check the directory you've been working on\r\n> 2. Check for any typos\r\n\r\nError happens during the execution of this line:\r\ndataset = load_dataset(\"german_legal_entity_recognition\")\r\n\r\nAlso, when I try to open mentioned links via Opera I have errors \"404: Not Found\" and \"This XML file does not appear to have any style information associated with it. The document tree is shown below.\" respectively.", "Hello @nataly-obr, the `german_legal_entity_recognition` dataset has not yet been released (it is part of the coming soon v2 release).\r\n\r\nYou can still access it now if you want, but you will need to install `datasets` via the master branch:\r\n`pip install git+https://github.com/huggingface/datasets.git@master`\r\n\r\nPlease let me know if it solves the issue :) " ]
1,607,431,321,000
1,608,134,593,000
1,608,134,593,000
NONE
null
null
null
FileNotFoundError: Couldn't find file locally at german_legal_entity_recognition/german_legal_entity_recognition.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/german_legal_entity_recognition/german_legal_entity_recognition.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/german_legal_entity_recognition/german_legal_entity_recognition.py
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1299/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1299/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/1298
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1298/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1298/comments
https://api.github.com/repos/huggingface/datasets/issues/1298/events
https://github.com/huggingface/datasets/pull/1298
759,412,451
MDExOlB1bGxSZXF1ZXN0NTM0NDIyODQy
1,298
Add OPUS Ted Talks 2013
{ "login": "abhishekkrthakur", "id": 1183441, "node_id": "MDQ6VXNlcjExODM0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhishekkrthakur", "html_url": "https://github.com/abhishekkrthakur", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "merging since the CI is fixed on master" ]
1,607,431,118,000
1,608,137,870,000
1,608,137,869,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1298", "html_url": "https://github.com/huggingface/datasets/pull/1298", "diff_url": "https://github.com/huggingface/datasets/pull/1298.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1298.patch", "merged_at": 1608137869000 }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1298/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1298/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1297
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1297/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1297/comments
https://api.github.com/repos/huggingface/datasets/issues/1297/events
https://github.com/huggingface/datasets/pull/1297
759,404,103
MDExOlB1bGxSZXF1ZXN0NTM0NDE1ODMx
1,297
OPUS Ted Talks 2013
{ "login": "abhishekkrthakur", "id": 1183441, "node_id": "MDQ6VXNlcjExODM0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhishekkrthakur", "html_url": "https://github.com/abhishekkrthakur", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,430,339,000
1,607,430,950,000
1,607,430,950,000
MEMBER
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1297", "html_url": "https://github.com/huggingface/datasets/pull/1297", "diff_url": "https://github.com/huggingface/datasets/pull/1297.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1297.patch", "merged_at": null }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1297/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1296
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1296/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1296/comments
https://api.github.com/repos/huggingface/datasets/issues/1296/events
https://github.com/huggingface/datasets/pull/1296
759,375,292
MDExOlB1bGxSZXF1ZXN0NTM0MzkxNzQ1
1,296
The Snips Built In Intents 2016 dataset.
{ "login": "bduvenhage", "id": 8405335, "node_id": "MDQ6VXNlcjg0MDUzMzU=", "avatar_url": "https://avatars.githubusercontent.com/u/8405335?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bduvenhage", "html_url": "https://github.com/bduvenhage", "followers_url": "https://api.github.com/users/bduvenhage/followers", "following_url": "https://api.github.com/users/bduvenhage/following{/other_user}", "gists_url": "https://api.github.com/users/bduvenhage/gists{/gist_id}", "starred_url": "https://api.github.com/users/bduvenhage/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bduvenhage/subscriptions", "organizations_url": "https://api.github.com/users/bduvenhage/orgs", "repos_url": "https://api.github.com/users/bduvenhage/repos", "events_url": "https://api.github.com/users/bduvenhage/events{/privacy}", "received_events_url": "https://api.github.com/users/bduvenhage/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "It is not clear how to automatically add the dummy data if the source data is a more complex json format. Should I manually take a fraction of the source data and include it as dummy data?", "Will tag the dataset and update the dataset card." ]
1,607,427,610,000
1,607,441,272,000
1,607,441,272,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1296", "html_url": "https://github.com/huggingface/datasets/pull/1296", "diff_url": "https://github.com/huggingface/datasets/pull/1296.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1296.patch", "merged_at": null }
This PR proposes to add the Snips.ai built in intents dataset. The first configuration added is for the intent labels only, but the dataset includes entity slots that may in future be added as alternate configurations.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1296/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1295
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1295/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1295/comments
https://api.github.com/repos/huggingface/datasets/issues/1295/events
https://github.com/huggingface/datasets/pull/1295
759,375,251
MDExOlB1bGxSZXF1ZXN0NTM0MzkxNzE1
1,295
add hrenwac_para
{ "login": "IvanZidov", "id": 11391118, "node_id": "MDQ6VXNlcjExMzkxMTE4", "avatar_url": "https://avatars.githubusercontent.com/u/11391118?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IvanZidov", "html_url": "https://github.com/IvanZidov", "followers_url": "https://api.github.com/users/IvanZidov/followers", "following_url": "https://api.github.com/users/IvanZidov/following{/other_user}", "gists_url": "https://api.github.com/users/IvanZidov/gists{/gist_id}", "starred_url": "https://api.github.com/users/IvanZidov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IvanZidov/subscriptions", "organizations_url": "https://api.github.com/users/IvanZidov/orgs", "repos_url": "https://api.github.com/users/IvanZidov/repos", "events_url": "https://api.github.com/users/IvanZidov/events{/privacy}", "received_events_url": "https://api.github.com/users/IvanZidov/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,427,606,000
1,607,708,540,000
1,607,708,540,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1295", "html_url": "https://github.com/huggingface/datasets/pull/1295", "diff_url": "https://github.com/huggingface/datasets/pull/1295.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1295.patch", "merged_at": 1607708540000 }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1295/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1295/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1294
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1294/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1294/comments
https://api.github.com/repos/huggingface/datasets/issues/1294/events
https://github.com/huggingface/datasets/pull/1294
759,365,246
MDExOlB1bGxSZXF1ZXN0NTM0MzgzMjg5
1,294
adding opus_euconst
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,426,656,000
1,607,453,060,000
1,607,452,883,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1294", "html_url": "https://github.com/huggingface/datasets/pull/1294", "diff_url": "https://github.com/huggingface/datasets/pull/1294.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1294.patch", "merged_at": 1607452882000 }
Adding EUconst, a parallel corpus collected from the European Constitution. 21 languages, 210 bitexts
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1294/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1293
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1293/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1293/comments
https://api.github.com/repos/huggingface/datasets/issues/1293/events
https://github.com/huggingface/datasets/pull/1293
759,360,113
MDExOlB1bGxSZXF1ZXN0NTM0Mzc4OTQ0
1,293
add hrenwac_para
{ "login": "ivan-zidov", "id": 51969305, "node_id": "MDQ6VXNlcjUxOTY5MzA1", "avatar_url": "https://avatars.githubusercontent.com/u/51969305?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ivan-zidov", "html_url": "https://github.com/ivan-zidov", "followers_url": "https://api.github.com/users/ivan-zidov/followers", "following_url": "https://api.github.com/users/ivan-zidov/following{/other_user}", "gists_url": "https://api.github.com/users/ivan-zidov/gists{/gist_id}", "starred_url": "https://api.github.com/users/ivan-zidov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ivan-zidov/subscriptions", "organizations_url": "https://api.github.com/users/ivan-zidov/orgs", "repos_url": "https://api.github.com/users/ivan-zidov/repos", "events_url": "https://api.github.com/users/ivan-zidov/events{/privacy}", "received_events_url": "https://api.github.com/users/ivan-zidov/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,426,201,000
1,607,427,287,000
1,607,427,278,000
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1293", "html_url": "https://github.com/huggingface/datasets/pull/1293", "diff_url": "https://github.com/huggingface/datasets/pull/1293.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1293.patch", "merged_at": null }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1293/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1293/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1292
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1292/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1292/comments
https://api.github.com/repos/huggingface/datasets/issues/1292/events
https://github.com/huggingface/datasets/pull/1292
759,354,627
MDExOlB1bGxSZXF1ZXN0NTM0Mzc0MzQ3
1,292
arXiv dataset added
{ "login": "tanmoyio", "id": 33005287, "node_id": "MDQ6VXNlcjMzMDA1Mjg3", "avatar_url": "https://avatars.githubusercontent.com/u/33005287?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tanmoyio", "html_url": "https://github.com/tanmoyio", "followers_url": "https://api.github.com/users/tanmoyio/followers", "following_url": "https://api.github.com/users/tanmoyio/following{/other_user}", "gists_url": "https://api.github.com/users/tanmoyio/gists{/gist_id}", "starred_url": "https://api.github.com/users/tanmoyio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tanmoyio/subscriptions", "organizations_url": "https://api.github.com/users/tanmoyio/orgs", "repos_url": "https://api.github.com/users/tanmoyio/repos", "events_url": "https://api.github.com/users/tanmoyio/events{/privacy}", "received_events_url": "https://api.github.com/users/tanmoyio/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,425,708,000
1,607,436,133,000
1,607,436,133,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1292", "html_url": "https://github.com/huggingface/datasets/pull/1292", "diff_url": "https://github.com/huggingface/datasets/pull/1292.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1292.patch", "merged_at": null }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1292/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1292/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1291
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1291/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1291/comments
https://api.github.com/repos/huggingface/datasets/issues/1291/events
https://github.com/huggingface/datasets/pull/1291
759,352,810
MDExOlB1bGxSZXF1ZXN0NTM0MzcyNzk2
1,291
adding pubmed_qa dataset
{ "login": "tuner007", "id": 46425391, "node_id": "MDQ6VXNlcjQ2NDI1Mzkx", "avatar_url": "https://avatars.githubusercontent.com/u/46425391?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tuner007", "html_url": "https://github.com/tuner007", "followers_url": "https://api.github.com/users/tuner007/followers", "following_url": "https://api.github.com/users/tuner007/following{/other_user}", "gists_url": "https://api.github.com/users/tuner007/gists{/gist_id}", "starred_url": "https://api.github.com/users/tuner007/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tuner007/subscriptions", "organizations_url": "https://api.github.com/users/tuner007/orgs", "repos_url": "https://api.github.com/users/tuner007/repos", "events_url": "https://api.github.com/users/tuner007/events{/privacy}", "received_events_url": "https://api.github.com/users/tuner007/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,425,544,000
1,607,504,090,000
1,607,504,090,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1291", "html_url": "https://github.com/huggingface/datasets/pull/1291", "diff_url": "https://github.com/huggingface/datasets/pull/1291.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1291.patch", "merged_at": 1607504090000 }
Pubmed QA dataset: PQA-L(abeled) 1k PQA-U(nlabeled) 61.2k PQA-A(rtificially labeled) 211.3k
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1291/timeline
null
true
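A minimal sketch of loading the three PubMed QA subsets listed in PR #1291 above; the config identifiers below are assumptions that mirror the PQA-L / PQA-U / PQA-A naming from the description.

```python
# Minimal sketch; "pqa_labeled" / "pqa_unlabeled" / "pqa_artificial" are
# assumed config names mirroring the PQA-L / PQA-U / PQA-A subsets.
from datasets import load_dataset

pqa_l = load_dataset("pubmed_qa", "pqa_labeled", split="train")     # ~1k examples
pqa_u = load_dataset("pubmed_qa", "pqa_unlabeled", split="train")   # ~61.2k examples
pqa_a = load_dataset("pubmed_qa", "pqa_artificial", split="train")  # ~211.3k examples
```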
https://api.github.com/repos/huggingface/datasets/issues/1290
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1290/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1290/comments
https://api.github.com/repos/huggingface/datasets/issues/1290/events
https://github.com/huggingface/datasets/issues/1290
759,339,989
MDU6SXNzdWU3NTkzMzk5ODk=
1,290
imdb dataset cannot be downloaded
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @rabeehk , I am unable to reproduce your problem locally.\r\nCan you try emptying the cache (removing the content of `/idiap/temp/rkarimi/cache_home_1/datasets`) and retry ?", "Hi,\r\nthanks, I did remove the cache and still the same error here\r\n\r\n```\r\n>>> a = datasets.load_dataset(\"imdb\", split=\"train\")\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\nDownloading and preparing dataset imdb/plain_text (download: 80.23 MiB, generated: 127.06 MiB, post-processed: Unknown size, total: 207.28 MiB) to /idiap/temp/rkarimi/cache_home_1/datasets/imdb/plain_text/1.0.0/90099cb476936b753383ba2ae6ab2eae419b2e87f71cd5189cb9c8e5814d12a3...\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads\r\nTraceback (most recent call last): \r\n File \"<stdin>\", line 1, in <module>\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py\", line 476, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py\", line 558, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/info_utils.py\", line 73, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=4902716, num_examples=3680, dataset_name='imdb')}]\r\n```\r\n\r\ndatasets version\r\n```\r\ndatasets 1.1.2 <pip>\r\ntensorflow-datasets 4.1.0 <pip>\r\n\r\n```", "resolved with moving to version 1.1.3" ]
1,607,424,456,000
1,608,831,489,000
1,608,831,489,000
CONTRIBUTOR
null
null
null
Hi, please find the error below when getting the imdb train split. Thanks. `>>> datasets.load_dataset("imdb", split="train")` errors: ``` cahce dir /idiap/temp/rkarimi/cache_home_1/datasets cahce dir /idiap/temp/rkarimi/cache_home_1/datasets Downloading and preparing dataset imdb/plain_text (download: 80.23 MiB, generated: 127.06 MiB, post-processed: Unknown size, total: 207.28 MiB) to /idiap/temp/rkarimi/cache_home_1/datasets/imdb/plain_text/1.0.0/90099cb476936b753383ba2ae6ab2eae419b2e87f71cd5189cb9c8e5814d12a3... cahce dir /idiap/temp/rkarimi/cache_home_1/datasets cahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 558, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 73, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=7486451, num_examples=5628, dataset_name='imdb')}] ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1290/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1290/timeline
null
false
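A minimal sketch of the workaround that closed issue #1290 above: empty the stale cache directory and retry on datasets>=1.1.3, the version the reporter said resolved it. The cache path is the reporter's; substitute your own.

```python
# Minimal sketch of the fix from the thread: remove the stale cache, then
# retry on datasets>=1.1.3. The path below is the reporter's cache directory;
# substitute your own (e.g. the value of HF_DATASETS_CACHE).
import shutil
from datasets import load_dataset

shutil.rmtree("/idiap/temp/rkarimi/cache_home_1/datasets", ignore_errors=True)
imdb_train = load_dataset("imdb", split="train")
print(len(imdb_train))  # 25000 labeled training examples
```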
https://api.github.com/repos/huggingface/datasets/issues/1289
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1289/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1289/comments
https://api.github.com/repos/huggingface/datasets/issues/1289/events
https://github.com/huggingface/datasets/pull/1289
759,333,684
MDExOlB1bGxSZXF1ZXN0NTM0MzU2ODUw
1,289
Jigsaw toxicity classification dataset added
{ "login": "taihim", "id": 13764071, "node_id": "MDQ6VXNlcjEzNzY0MDcx", "avatar_url": "https://avatars.githubusercontent.com/u/13764071?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taihim", "html_url": "https://github.com/taihim", "followers_url": "https://api.github.com/users/taihim/followers", "following_url": "https://api.github.com/users/taihim/following{/other_user}", "gists_url": "https://api.github.com/users/taihim/gists{/gist_id}", "starred_url": "https://api.github.com/users/taihim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/taihim/subscriptions", "organizations_url": "https://api.github.com/users/taihim/orgs", "repos_url": "https://api.github.com/users/taihim/repos", "events_url": "https://api.github.com/users/taihim/events{/privacy}", "received_events_url": "https://api.github.com/users/taihim/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,423,931,000
1,607,440,668,000
1,607,440,668,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1289", "html_url": "https://github.com/huggingface/datasets/pull/1289", "diff_url": "https://github.com/huggingface/datasets/pull/1289.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1289.patch", "merged_at": null }
The dataset requires manually downloading data from Kaggle.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1289/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1289/timeline
null
true
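Since PR #1289 above notes the data must be fetched manually from Kaggle, loading would presumably go through `data_dir`; a minimal sketch follows. Both the dataset identifier and the local path are assumptions (the PR was not merged).

```python
# Minimal sketch; "jigsaw_toxicity_pred" and the local path are assumptions,
# since the PR itself only states that the Kaggle data is downloaded by hand.
from datasets import load_dataset

manual_dir = "/path/to/kaggle/jigsaw"  # hypothetical: unpacked Kaggle files
toxicity = load_dataset("jigsaw_toxicity_pred", data_dir=manual_dir, split="train")
```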
https://api.github.com/repos/huggingface/datasets/issues/1288
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1288/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1288/comments
https://api.github.com/repos/huggingface/datasets/issues/1288/events
https://github.com/huggingface/datasets/pull/1288
759,309,457
MDExOlB1bGxSZXF1ZXN0NTM0MzM2Mzgz
1,288
Add CodeSearchNet corpus dataset
{ "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "repos_url": "https://api.github.com/users/SBrandeis/repos", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq ready for a second review" ]
1,607,422,070,000
1,607,533,528,000
1,607,533,528,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1288", "html_url": "https://github.com/huggingface/datasets/pull/1288", "diff_url": "https://github.com/huggingface/datasets/pull/1288.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1288.patch", "merged_at": 1607533527000 }
This PR adds the CodeSearchNet corpus proxy dataset for semantic code search: https://github.com/github/CodeSearchNet I have had a few issues, mentioned below. Would appreciate some help on how to solve them. ## Issues generating dataset card Is there something wrong with my declaration of the dataset features ? ``` features=datasets.Features( { "repository_name": datasets.Value("string"), "func_path_in_repository": datasets.Value("string"), "func_name": datasets.Value("string"), "whole_func_string": datasets.Value("string"), "language": datasets.Value("string"), "func_code_string": datasets.Value("string"), "func_code_tokens": datasets.Sequence(datasets.Value("string")), "func_documentation_string": datasets.Value("string"), "func_documentation_tokens": datasets.Sequence(datasets.Value("string")), "split_name": datasets.Value("string"), "func_code_url": datasets.Value("string"), # TODO - add licensing info in the examples } ), ``` When running the streamlite app for tagging the dataset on my machine, I get the following error : ![image](https://user-images.githubusercontent.com/33657802/101469132-9ed12c80-3944-11eb-94ff-2d9c1d0ea080.png) ## Issues with dummy data Due to the unusual structure of the data, I have been unable to generate dummy data automatically. I tried to generate it manually, but pytests fail when using the manually-generated dummy data ! Pytests work fine when using the real data. ``` ============================================================================================== test session starts ============================================================================================== platform linux -- Python 3.7.9, pytest-6.1.2, py-1.9.0, pluggy-0.13.1 plugins: xdist-2.1.0, forked-1.3.0 collected 1 item tests/test_dataset_common.py F [100%] =================================================================================================== FAILURES ==================================================================================================== ________________________________________________________________________ LocalDatasetTest.test_load_dataset_all_configs_code_search_net _________________________________________________________________________ self = <tests.test_dataset_common.LocalDatasetTest testMethod=test_load_dataset_all_configs_code_search_net>, dataset_name = 'code_search_net' @slow def test_load_dataset_all_configs(self, dataset_name): configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True) > self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True) tests/test_dataset_common.py:237: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ tests/test_dataset_common.py:198: in check_load_dataset self.parent.assertTrue(len(dataset[split]) > 0) E AssertionError: False is not true --------------------------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------------------------- Downloading and preparing dataset code_search_net/all (download: 1.00 MiB, generated: 1.00 MiB, post-processed: Unknown size, total: 2.00 MiB) to /tmp/tmppx78sj24/code_search_net/all/1.0.0... Dataset code_search_net downloaded and prepared to /tmp/tmppx78sj24/code_search_net/all/1.0.0. Subsequent calls will reuse this data. 
--------------------------------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------------------------------- ... (irrelevant info - Deprecation warnings) ============================================================================================ short test summary info ============================================================================================ FAILED tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_code_search_net - AssertionError: False is not true ========================================================================================= 1 failed, 4 warnings in 3.00s ======================================================================================== ``` ## Note : Data structure in S3 The data is stored on S3, and organized by programming languages. It is stored in the following repository structure: ``` . ├── <language_name> # e.g. python │   └── final │   └── jsonl │   ├── test │   │   └── <language_name>_test_0.jsonl.gz │   ├── train │   │   ├── <language_name>_train_0.jsonl.gz │   │   ├── <language_name>_train_1.jsonl.gz │   │   ├── ... │   │   └── <language_name>_train_n.jsonl.gz │   └── valid │   └── <language_name>_valid_0.jsonl.gz ├── <language_name>_dedupe_definitions_v2.pkl └── <language_name>_licenses.pkl ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1288/timeline
null
true
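A minimal sketch of consuming the CodeSearchNet corpus from PR #1288 above, using two of the feature names declared in the PR body; the "python" config name is assumed from the per-language S3 layout shown there.

```python
# Minimal sketch; the "python" config is assumed from the per-language S3
# tree, while the feature names come from the PR's Features declaration.
from datasets import load_dataset

csn = load_dataset("code_search_net", "python", split="train")
ex = csn[0]
print(ex["func_name"])
print(ex["func_documentation_string"][:80])
```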
https://api.github.com/repos/huggingface/datasets/issues/1287
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1287/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1287/comments
https://api.github.com/repos/huggingface/datasets/issues/1287/events
https://github.com/huggingface/datasets/issues/1287
759,300,992
MDU6SXNzdWU3NTkzMDA5OTI=
1,287
'iwslt2017-ro-nl', cannot be downloaded
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[]
null
[ "the same issue with datasets.load_dataset(\"iwslt2017\", 'iwslt2017-en-nl', split=split), ..... ", "even with setting master like the following command, still remains \r\n\r\ndatasets.load_dataset(\"iwslt2017\", 'iwslt2017-en-nl', split=\"train\", script_version=\"master\")\r\n", "Looks like the data has been moved from its original location to google drive\r\n\r\nNew url: https://drive.google.com/u/0/uc?id=12ycYSzLIG253AFN35Y6qoyf9wtkOjakp&export=download" ]
1,607,421,415,000
1,608,056,694,000
null
CONTRIBUTOR
null
null
null
Hi, I am trying `>>> datasets.load_dataset("iwslt2017", 'iwslt2017-ro-nl', split="train")` and getting this error. Thank you for your help. ``` cahce dir /idiap/temp/rkarimi/cache_home_1/datasets cahce dir /idiap/temp/rkarimi/cache_home_1/datasets Downloading and preparing dataset iwsl_t217/iwslt2017-ro-nl (download: 314.07 MiB, generated: 39.92 MiB, post-processed: Unknown size, total: 354.00 MiB) to /idiap/temp/rkarimi/cache_home_1/datasets/iwsl_t217/iwslt2017-ro-nl/1.0.0/cca6935a0851a8ceac1202a62c958738bdfa23c57a51bc52ac1c5ebd2aa172cd... cahce dir /idiap/temp/rkarimi/cache_home_1/datasets cahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/iwslt2017/cca6935a0851a8ceac1202a62c958738bdfa23c57a51bc52ac1c5ebd2aa172cd/iwslt2017.py", line 118, in _split_generators dl_dir = dl_manager.download_and_extract(MULTI_URL) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 216, in map_nested return function(data_struct) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 477, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://wit3.fbk.eu/archive/2017-01-trnmted//texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.tgz ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1287/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1287/timeline
null
false
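Before retrying the failing iwslt2017 load from issue #1287 above, one can probe the relocated archive at the Google Drive URL quoted in the last comment; a minimal sketch:

```python
# Minimal sketch: HEAD-check the new Google Drive mirror quoted in the
# comments before retrying load_dataset("iwslt2017", "iwslt2017-ro-nl").
import requests

url = "https://drive.google.com/u/0/uc?id=12ycYSzLIG253AFN35Y6qoyf9wtkOjakp&export=download"
print(requests.head(url, allow_redirects=True, timeout=30).status_code)  # expect 200
```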
https://api.github.com/repos/huggingface/datasets/issues/1286
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1286/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1286/comments
https://api.github.com/repos/huggingface/datasets/issues/1286/events
https://github.com/huggingface/datasets/issues/1286
759,291,509
MDU6SXNzdWU3NTkyOTE1MDk=
1,286
[libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): Aborted
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I remember also getting the same issue for several other translation datasets like all the iwslt2017 group, this is blokcing me and I really need to fix it and I was wondering if you have an idea on this. @lhoestq thanks,. ", "maybe there is an empty line or something inside these datasets? could you tell me why this is happening? thanks ", "I just checked and the wmt16 en-ro doesn't have empty lines\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nd = load_dataset(\"wmt16\", \"ro-en\", split=\"train\")\r\nlen(d) # 610320\r\nlen(d.filter(lambda x: len(x[\"translation\"][\"en\"].strip()) > 0)) # 610320\r\nlen(d.filter(lambda x: len(x[\"translation\"][\"ro\"].strip()) > 0)) # 610320\r\n# also tested for split=\"validation\" and \"test\"\r\n```\r\n\r\nCan you open an issue on the `transformers` repo ? also cc @sgugger ", "Hi @lhoestq \r\nI am not really sure which part is causing this, to me this is more related to dataset library as this is happening for some of the datassets below please find the information to reprodcue the bug, this is really blocking me and I appreciate your help\r\n\r\n\r\n## Environment info\r\n- `transformers` version: 3.5.1\r\n- Platform: GPU\r\n- Python version: 3.7 \r\n- PyTorch version (GPU?): 1.0.4\r\n- Tensorflow version (GPU?): - \r\n- Using GPU in script?: - \r\n- Using distributed or parallel set-up in script?: - \r\n\r\n### Who can help\r\n tokenizers: @mfuntowicz\r\n Trainer: @sgugger\r\n TextGeneration: @TevenLeScao \r\n nlp datasets: [different repo](https://github.com/huggingface/nlp)\r\n rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)\r\n examples/seq2seq: @patil-suraj\r\n\r\n## Information\r\nHi\r\nI am testing seq2seq model with T5 on different datasets and this is always getting the following bug, this is really blocking me as this fails for many datasets. could you have a look please? 
thanks \r\n\r\n```\r\n[libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): \r\nterminate called after throwing an instance of 'google::protobuf::FatalException'\r\n what(): CHECK failed: (index) >= (0): \r\nAborted\r\n\r\n```\r\n\r\nTo reproduce the error please run on 1 GPU:\r\n```\r\ngit clone git@github.com:rabeehk/debug-seq2seq.git\r\npython setup.py develop \r\ncd seq2seq \r\npython finetune_t5_trainer.py temp.json\r\n\r\n```\r\n\r\nFull output of the program:\r\n\r\n```\r\n(internship) rkarimi@vgnh008:/idiap/user/rkarimi/dev/debug-seq2seq/seq2seq$ python finetune_t5_trainer.py temp.json \r\n2020-12-12 15:38:16.234542: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\r\n2020-12-12 15:38:16.234598: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\n12/12/2020 15:38:32 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: False\r\n12/12/2020 15:38:32 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(output_dir='outputs/test', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=False, evaluate_during_training=False, evaluation_strategy=<EvaluationStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=64, per_device_eval_batch_size=64, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=0.01, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=2, max_steps=-1, warmup_steps=500, logging_dir='runs/Dec12_15-38-32_vgnh008', logging_first_step=True, logging_steps=200, save_steps=200, save_total_limit=1, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=200, dataloader_num_workers=0, past_index=-1, run_name='outputs/test', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, label_smoothing=0.1, sortish_sampler=False, predict_with_generate=True, adafactor=False, encoder_layerdrop=None, decoder_layerdrop=None, dropout=None, attention_dropout=None, lr_scheduler='linear', fixed_length_emb=None, encoder_projection=None, encoder_pooling=None, projection_length=None, only_projection_bottleneck=False, concat_projection_token=False, gcs_bucket='ruse-xcloud-bucket', temperature=10, train_adapters=True, do_finetune=True, parametric_task_embedding=False, eval_output_dir='outputs/finetune-adapter/test-n-1-lr-1e-02-e-20')\r\nSome weights of T5ForConditionalGeneration were not initialized from the model checkpoint at t5-small and are newly initialized: ['encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 
'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.0.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.0.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 
'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.1.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.1.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 
'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.2.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.2.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.3.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 
'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.3.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.4.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 
'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.4.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.4.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.5.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 
'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.5.layer.1.adapter_controller.post_layer_norm.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.0.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.0.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 
'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.1.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.1.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 
'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.2.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.2.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 
'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.3.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.3.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 
'decoder.block.4.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.4.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.4.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.5.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 
'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.5.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.5.layer.2.adapter_controller.post_layer_norm.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\n12/12/2020 15:38:44 - INFO - filelock - Lock 140079090376272 acquired on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\n12/12/2020 15:38:44 - INFO - filelock - Lock 140079090376272 released on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\nUsing custom data configuration default\r\n12/12/2020 15:38:44 - INFO - filelock - Lock 140082549312272 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12/12/2020 15:38:44 - INFO - filelock - Lock 140082549312272 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12/12/2020 15:38:44 - INFO - filelock - Lock 140082549365648 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nReusing dataset boolq (/idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534)\r\n12/12/2020 15:38:44 - INFO - filelock - Lock 140082549365648 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nLoading cached processed dataset at /idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534/cache-6810ece2a440c3be.arrow\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\n12/12/2020 15:38:45 - INFO - filelock - Lock 140082549560848 acquired on 
/idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\n12/12/2020 15:38:45 - INFO - filelock - Lock 140082549560848 released on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\nUsing custom data configuration default\r\n12/12/2020 15:38:45 - INFO - filelock - Lock 140082549560848 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12/12/2020 15:38:45 - INFO - filelock - Lock 140082549560848 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12/12/2020 15:38:45 - INFO - filelock - Lock 140082549365200 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nReusing dataset boolq (/idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534)\r\n12/12/2020 15:38:45 - INFO - filelock - Lock 140082549365200 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nLoading cached processed dataset at /idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534/cache-9a2822394a3a4e34.arrow\r\n12/12/2020 15:38:45 - INFO - seq2seq.metrics.metrics - selected metric <function build_compute_metrics_fn.<locals>.classification_metrics at 0x7f66b464cc20> for task boolq\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - ***** Running training *****\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Num examples = 10\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Num Epochs = 2\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Instantaneous batch size per device = 64\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 64\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Gradient Accumulation steps = 1\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Total optimization steps = 2\r\n{'loss': 529.79443359375, 'learning_rate': 2e-05, 'epoch': 1.0} \r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.37it/s]12/12/2020 15:38:46 - INFO - seq2seq.trainers.trainer - \r\n\r\nTraining completed. 
Do not forget to share your model on huggingface.co/models =)\r\n\r\n\r\n{'epoch': 2.0} \r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.43it/s]\r\n12/12/2020 15:38:46 - INFO - seq2seq.trainers.trainer - Saving model checkpoint to outputs/test\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\n12/12/2020 15:38:59 - INFO - filelock - Lock 140079084929680 acquired on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\n12/12/2020 15:38:59 - INFO - filelock - Lock 140079084929680 released on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\nUsing custom data configuration default\r\n12/12/2020 15:38:59 - INFO - filelock - Lock 140079084929360 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12/12/2020 15:38:59 - INFO - filelock - Lock 140079084929360 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12/12/2020 15:38:59 - INFO - filelock - Lock 140079085355216 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nReusing dataset boolq (/idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534)\r\n12/12/2020 15:38:59 - INFO - filelock - Lock 140079085355216 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nLoading cached processed dataset at /idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534/cache-164dd1d57e9fa69a.arrow\r\n12/12/2020 15:38:59 - INFO - seq2seq.metrics.metrics - selected metric <function build_compute_metrics_fn.<locals>.classification_metrics at 0x7f66b40c67a0> for task boolq\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - ***** Running training *****\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Num examples = 1\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Num Epochs = 2\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Instantaneous batch size per device = 64\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Total train batch size (w. 
parallel, distributed & accumulation) = 64\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Gradient Accumulation steps = 1\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Total optimization steps = 2\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Continuing training from checkpoint, will skip to saved global_step\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Continuing training from epoch 2\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Continuing training from global step 2\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Will skip the first 0 steps in the first epoch\r\n 0%| | 0/2 [00:00<?, ?it/s]12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - \r\n\r\nTraining completed. Do not forget to share your model on huggingface.co/models =)\r\n\r\n\r\n{'epoch': 2.0} \r\n 0%| | 0/2 [00:00<?, ?it/s]\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Saving model checkpoint to outputs/finetune-adapter/test-n-1-lr-1e-02-e-20/boolq\r\n12/12/2020 15:39:07 - INFO - seq2seq.utils.utils - using task specific params for boolq: {'max_length': 3}\r\n12/12/2020 15:39:07 - INFO - seq2seq.trainers.trainer - ***** Running Evaluation *****\r\n12/12/2020 15:39:07 - INFO - seq2seq.trainers.trainer - Num examples = 3269\r\n12/12/2020 15:39:07 - INFO - seq2seq.trainers.trainer - Batch size = 64\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 52/52 [00:12<00:00, 4.86it/s][libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): \r\nterminate called after throwing an instance of 'google::protobuf::FatalException'\r\n what(): CHECK failed: (index) >= (0): \r\nAborted\r\n```\r\n\r\n\r\n\r\n", "solved see https://github.com/huggingface/transformers/issues/9079?_pjax=%23js-repo-pjax-container ", "Hii please follow me" ]
1,607,420,655,000
1,607,801,782,000
1,607,790,156,000
CONTRIBUTOR
null
null
null
Hi
I am getting this error when evaluating on wmt16-ro-en using finetune_trainer.py of the huggingface repo. Thanks for your help.

{'epoch': 20.0}
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:16<00:00, 1.22it/s]
12/08/2020 10:41:19 - INFO - seq2seq.trainers.trainer - Saving model checkpoint to outputs/experiment/joint/finetune/lr-2e-5
12/08/2020 10:41:24 - INFO - __main__ - {'wmt16-en-ro': Dataset(features: {'src_texts': Value(dtype='string', id=None), 'task': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 1998), 'qnli': Dataset(features: {'src_texts': Value(dtype='string', id=None), 'task': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 5462), 'scitail': Dataset(features: {'src_texts': Value(dtype='string', id=None), 'task': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 1303)}
12/08/2020 10:41:24 - INFO - __main__ - *** Evaluate ***
12/08/2020 10:41:24 - INFO - seq2seq.utils.utils - using task specific params for wmt16-en-ro: {'max_length': 300, 'num_beams': 4}
12/08/2020 10:41:24 - INFO - seq2seq.trainers.trainer - ***** Running Evaluation *****
12/08/2020 10:41:24 - INFO - seq2seq.trainers.trainer - Num examples = 1998
12/08/2020 10:41:24 - INFO - seq2seq.trainers.trainer - Batch size = 64
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 32/32 [00:37<00:00, 1.19s/it]
[libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0):
terminate called after throwing an instance of 'google::protobuf::FatalException'
  what(): CHECK failed: (index) >= (0):
Aborted
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1286/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1286/timeline
null
false
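The record above ends in a sentencepiece abort, `CHECK failed: (index) >= (0)`, during evaluation, and the reporter marks it solved via the linked transformers issue #9079. As a hedged sketch only (the `safe_decode` helper name and the -100 diagnosis are assumptions here, not a quote from that thread): sentencepiece aborts this way when negative token ids reach decoding, and the -100 ignore-index used to mask label tensors is a common source, so the usual guard swaps it for the pad id first.

```python
# Minimal sketch, assuming the crash comes from -100 label padding being
# decoded; `safe_decode` is a hypothetical helper, not code from the repo.
import numpy as np

def safe_decode(tokenizer, label_ids):
    # Replace the -100 ignore-index with the pad token id before decoding,
    # since sentencepiece aborts on negative ids.
    ids = np.where(np.array(label_ids) != -100, np.array(label_ids), tokenizer.pad_token_id)
    return tokenizer.batch_decode(ids, skip_special_tokens=True)
```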
https://api.github.com/repos/huggingface/datasets/issues/1285
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1285/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1285/comments
https://api.github.com/repos/huggingface/datasets/issues/1285/events
https://github.com/huggingface/datasets/issues/1285
759,278,758
MDU6SXNzdWU3NTkyNzg3NTg=
1,285
boolq does not work
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "here is the minimal code to reproduce\r\n\r\n`datasets>>> datasets.load_dataset(\"boolq\", \"train\")\r\n\r\nthe errors\r\n\r\n```\r\n`cahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\nUsing custom data configuration train\r\nDownloading and preparing dataset boolq/train (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /idiap/temp/rkarimi/cache_home_1/datasets/boolq/train/0.1.0/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11...\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py\", line 476, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py\", line 531, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \" /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py\", line 74, in _split_generators\r\n downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy)\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py\", line 149, in download_custom\r\n custom_download(url, path)\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py\", line 516, in copy_v2\r\n compat.path_to_bytes(src), compat.path_to_bytes(dst), overwrite)\r\n\r\n\r\n\r\n```", "This has been fixed by #881 \r\nthis fix will be available in the next release soon.\r\n\r\nIf you don't want to wait for the release you can actually load the latest version of boolq by specifying `script_version=\"master\"` in `load_dataset`", "thank you this solved this issue, for now seems to work, thanks " ]
1,607,419,727,000
1,607,420,830,000
1,607,420,830,000
CONTRIBUTOR
null
null
null
Hi
I am getting this error when trying to load boolq, thanks for your help

ts_boolq_default_0.1.0_2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11.lock
Traceback (most recent call last):
  File "finetune_t5_trainer.py", line 274, in <module>
    main()
  File "finetune_t5_trainer.py", line 147, in main
    for task in data_args.tasks]
  File "finetune_t5_trainer.py", line 147, in <listcomp>
    for task in data_args.tasks]
  File "/remote/idiap.svm/user.active/rkarimi/dev/ruse/seq2seq/tasks/tasks.py", line 58, in get_dataset
    dataset = self.load_dataset(split=split)
  File "/remote/idiap.svm/user.active/rkarimi/dev/ruse/seq2seq/tasks/tasks.py", line 54, in load_dataset
    return datasets.load_dataset(self.task.name, split=split)
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
    ignore_verifications=ignore_verifications,
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators
    downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy)
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 149, in download_custom
    custom_download(url, path)
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py", line 516, in copy_v2
    compat.path_to_bytes(src), compat.path_to_bytes(dst), overwrite)
tensorflow.python.framework.errors_impl.AlreadyExistsError: file already exists
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1285/timeline
null
false
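The comments in the boolq record above note that the download bug was fixed by #881 and, until the next release, suggest loading the latest version of the script by passing `script_version="master"` to `load_dataset`. A minimal sketch of that workaround, assuming a `datasets` version of that era where the argument is still named `script_version` (later versions renamed it `revision`):

```python
# Load the patched boolq script from the master branch instead of the
# pinned release, as suggested in the comment thread above.
import datasets

boolq = datasets.load_dataset("boolq", script_version="master")
print(boolq)
```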
https://api.github.com/repos/huggingface/datasets/issues/1284
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1284/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1284/comments
https://api.github.com/repos/huggingface/datasets/issues/1284/events
https://github.com/huggingface/datasets/pull/1284
759,269,920
MDExOlB1bGxSZXF1ZXN0NTM0MzAzNDk0
1,284
Update coqa dataset url
{ "login": "ojasaar", "id": 73708394, "node_id": "MDQ6VXNlcjczNzA4Mzk0", "avatar_url": "https://avatars.githubusercontent.com/u/73708394?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ojasaar", "html_url": "https://github.com/ojasaar", "followers_url": "https://api.github.com/users/ojasaar/followers", "following_url": "https://api.github.com/users/ojasaar/following{/other_user}", "gists_url": "https://api.github.com/users/ojasaar/gists{/gist_id}", "starred_url": "https://api.github.com/users/ojasaar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ojasaar/subscriptions", "organizations_url": "https://api.github.com/users/ojasaar/orgs", "repos_url": "https://api.github.com/users/ojasaar/repos", "events_url": "https://api.github.com/users/ojasaar/events{/privacy}", "received_events_url": "https://api.github.com/users/ojasaar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,418,998,000
1,607,451,549,000
1,607,451,549,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1284", "html_url": "https://github.com/huggingface/datasets/pull/1284", "diff_url": "https://github.com/huggingface/datasets/pull/1284.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1284.patch", "merged_at": 1607451549000 }
`datasets.stanford.edu` is invalid.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1284/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1284/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1283
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1283/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1283/comments
https://api.github.com/repos/huggingface/datasets/issues/1283/events
https://github.com/huggingface/datasets/pull/1283
759,251,457
MDExOlB1bGxSZXF1ZXN0NTM0Mjg4MDg2
1,283
Add dutch book review dataset
{ "login": "benjaminvdb", "id": 8875786, "node_id": "MDQ6VXNlcjg4NzU3ODY=", "avatar_url": "https://avatars.githubusercontent.com/u/8875786?v=4", "gravatar_id": "", "url": "https://api.github.com/users/benjaminvdb", "html_url": "https://github.com/benjaminvdb", "followers_url": "https://api.github.com/users/benjaminvdb/followers", "following_url": "https://api.github.com/users/benjaminvdb/following{/other_user}", "gists_url": "https://api.github.com/users/benjaminvdb/gists{/gist_id}", "starred_url": "https://api.github.com/users/benjaminvdb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benjaminvdb/subscriptions", "organizations_url": "https://api.github.com/users/benjaminvdb/orgs", "repos_url": "https://api.github.com/users/benjaminvdb/repos", "events_url": "https://api.github.com/users/benjaminvdb/events{/privacy}", "received_events_url": "https://api.github.com/users/benjaminvdb/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "> Really cool thanks !\r\n> \r\n> I left some (minor) comments\r\n\r\nThank you for your comments! 👍 I went ahead and improved the dataset card using your suggestions and some tweaks of my own. I hope you like it! 😄" ]
1,607,417,448,000
1,607,545,318,000
1,607,534,725,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1283", "html_url": "https://github.com/huggingface/datasets/pull/1283", "diff_url": "https://github.com/huggingface/datasets/pull/1283.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1283.patch", "merged_at": 1607534725000 }
- Name: Dutch Book Review Dataset (DBRD)
- Description: The DBRD (pronounced dee-bird) dataset contains over 110k book reviews along with associated binary sentiment polarity labels and is intended as a benchmark for sentiment classification in Dutch.
- Paper: https://arxiv.org/abs/1910.00896
- Data: https://github.com/benjaminvdb/DBRD
- Motivation: A large (real-life) dataset of Dutch book reviews and sentiment polarity (positive/negative), based on the associated rating.

Checks

- [x] Create the dataset script /datasets/dbrd/dbrd.py using the template
- [x] Fill the _DESCRIPTION and _CITATION variables
- [x] Implement _info(), _split_generators() and _generate_examples()
- [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class.
- [x] Generate the metadata file dataset_infos.json for all configurations
- [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card README.md using the template: fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1283/timeline
null
true
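For reference, once this PR is merged the corpus should be loadable under the script name given in the checklist above (`/datasets/dbrd/dbrd.py`, hence the id `dbrd`). A hedged usage sketch; the split layout is an assumption, not something stated in the record:

```python
# Load the Dutch Book Review Dataset added by this PR; the "dbrd" id comes
# from the script path in the checklist, the split names are assumed.
from datasets import load_dataset

dbrd = load_dataset("dbrd")
print(dbrd)              # available splits and row counts
print(dbrd["train"][0])  # one review with its sentiment polarity label
```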
https://api.github.com/repos/huggingface/datasets/issues/1282
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1282/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1282/comments
https://api.github.com/repos/huggingface/datasets/issues/1282/events
https://github.com/huggingface/datasets/pull/1282
759,208,335
MDExOlB1bGxSZXF1ZXN0NTM0MjQ4NzI5
1,282
add thaiqa_squad
{ "login": "cstorm125", "id": 15519308, "node_id": "MDQ6VXNlcjE1NTE5MzA4", "avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cstorm125", "html_url": "https://github.com/cstorm125", "followers_url": "https://api.github.com/users/cstorm125/followers", "following_url": "https://api.github.com/users/cstorm125/following{/other_user}", "gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}", "starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions", "organizations_url": "https://api.github.com/users/cstorm125/orgs", "repos_url": "https://api.github.com/users/cstorm125/repos", "events_url": "https://api.github.com/users/cstorm125/events{/privacy}", "received_events_url": "https://api.github.com/users/cstorm125/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,415,278,000
1,607,452,578,000
1,607,452,578,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1282", "html_url": "https://github.com/huggingface/datasets/pull/1282", "diff_url": "https://github.com/huggingface/datasets/pull/1282.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1282.patch", "merged_at": 1607452578000 }
The example format is a little different from SQuAD, since `thaiqa` always has one answer per question; I added a check that converts answers to lists if they are not already, to future-proof additional questions that might have multiple answers. `thaiqa_squad` is an open-domain, extractive question answering dataset (4,000 questions in `train` and 74 questions in `dev`) in [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, originally created by [NECTEC](https://www.nectec.or.th/en/) from Wikipedia articles and adapted to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format by [PyThaiNLP](https://github.com/PyThaiNLP/).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1282/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1282/timeline
null
true
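The PR body above describes one small schema tweak: since thaiqa always has exactly one answer per question, lone answers are wrapped into lists so examples match the SQuAD format. A hedged sketch of that normalization (`normalize_answers` and the field names are hypothetical illustrations, not the actual script code):

```python
# Wrap a lone answer into a list so the example matches the SQuAD schema,
# which expects the answers field to be a list even with a single answer.
def normalize_answers(answers):
    """Return `answers` as a list, wrapping a lone answer dict if needed."""
    return answers if isinstance(answers, list) else [answers]

single = {"text": "example answer", "answer_start": 12}
print(normalize_answers(single))    # -> [{'text': 'example answer', 'answer_start': 12}]
print(normalize_answers([single]))  # already a list, returned unchanged
```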
https://api.github.com/repos/huggingface/datasets/issues/1281
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1281/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1281/comments
https://api.github.com/repos/huggingface/datasets/issues/1281/events
https://github.com/huggingface/datasets/pull/1281
759,203,317
MDExOlB1bGxSZXF1ZXN0NTM0MjQ0MTA1
1,281
adding hybrid_qa
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,415,019,000
1,607,450,968,000
1,607,450,820,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1281", "html_url": "https://github.com/huggingface/datasets/pull/1281", "diff_url": "https://github.com/huggingface/datasets/pull/1281.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1281.patch", "merged_at": 1607450820000 }
Adding HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data https://github.com/wenhuchen/HybridQA
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1281/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1281/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1280
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1280/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1280/comments
https://api.github.com/repos/huggingface/datasets/issues/1280/events
https://github.com/huggingface/datasets/pull/1280
759,151,028
MDExOlB1bGxSZXF1ZXN0NTM0MTk2MDc0
1,280
disaster response messages dataset
{ "login": "darshan-gandhi", "id": 44197177, "node_id": "MDQ6VXNlcjQ0MTk3MTc3", "avatar_url": "https://avatars.githubusercontent.com/u/44197177?v=4", "gravatar_id": "", "url": "https://api.github.com/users/darshan-gandhi", "html_url": "https://github.com/darshan-gandhi", "followers_url": "https://api.github.com/users/darshan-gandhi/followers", "following_url": "https://api.github.com/users/darshan-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/darshan-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/darshan-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/darshan-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/darshan-gandhi/orgs", "repos_url": "https://api.github.com/users/darshan-gandhi/repos", "events_url": "https://api.github.com/users/darshan-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/darshan-gandhi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I have added the Readme.md as well, the PR is ready for review. \r\n\r\nThank you ", "Hi @lhoestq I have updated the code and files. Please if you could check once.\r\n\r\nThank you" ]
1,607,412,436,000
1,607,530,917,000
1,607,530,917,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1280", "html_url": "https://github.com/huggingface/datasets/pull/1280", "diff_url": "https://github.com/huggingface/datasets/pull/1280.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1280.patch", "merged_at": 1607530917000 }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1280/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1280/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1279
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1279/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1279/comments
https://api.github.com/repos/huggingface/datasets/issues/1279/events
https://github.com/huggingface/datasets/pull/1279
759,108,726
MDExOlB1bGxSZXF1ZXN0NTM0MTU4OTY5
1,279
added para_pat
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Updated with Translation feature type. Working on dataset tags and README", "merging since the CI is fixed on master" ]
1,607,408,927,000
1,607,953,277,000
1,607,953,277,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1279", "html_url": "https://github.com/huggingface/datasets/pull/1279", "diff_url": "https://github.com/huggingface/datasets/pull/1279.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1279.patch", "merged_at": 1607953277000 }
Dataset link: https://figshare.com/articles/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632

Currently working on the README.md.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1279/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1278
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1278/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1278/comments
https://api.github.com/repos/huggingface/datasets/issues/1278/events
https://github.com/huggingface/datasets/pull/1278
758,988,465
MDExOlB1bGxSZXF1ZXN0NTM0MDYwNDY5
1,278
Craigslist bargains
{ "login": "ZacharySBrown", "id": 7950786, "node_id": "MDQ6VXNlcjc5NTA3ODY=", "avatar_url": "https://avatars.githubusercontent.com/u/7950786?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZacharySBrown", "html_url": "https://github.com/ZacharySBrown", "followers_url": "https://api.github.com/users/ZacharySBrown/followers", "following_url": "https://api.github.com/users/ZacharySBrown/following{/other_user}", "gists_url": "https://api.github.com/users/ZacharySBrown/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZacharySBrown/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZacharySBrown/subscriptions", "organizations_url": "https://api.github.com/users/ZacharySBrown/orgs", "repos_url": "https://api.github.com/users/ZacharySBrown/repos", "events_url": "https://api.github.com/users/ZacharySBrown/events{/privacy}", "received_events_url": "https://api.github.com/users/ZacharySBrown/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Seeing this in the CircleCI builds, this is what I was originally getting before I started messing around with the download URLS to try to fix this:\r\n\r\n`FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpwvji917g/extracted/d6185140afb24ad8fee67392100a478269cba286b0d88915a137fdf88872de14/dummy_data/train__VARIABLE_MISUSE__SStuB.txt-00001-of-00300'`\r\n\r\nCould this be because of the files in my `dummy_data.zip`? I had to manually create it, and it looked like the test was looking for the following files, so I created the `.zip` with this structure:\r\n\r\n```\r\nArchive: dummy_data.zip\r\n creating: dummy_data/\r\n inflating: dummy_data/blobtest \r\n inflating: dummy_data/parsed.jsontrain \r\n inflating: dummy_data/parsed.jsonvalidation \r\n```", "Going to close this out and link to a new (cleaner) PR" ]
1,607,391,955,000
1,607,474,775,000
1,607,474,775,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1278", "html_url": "https://github.com/huggingface/datasets/pull/1278", "diff_url": "https://github.com/huggingface/datasets/pull/1278.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1278.patch", "merged_at": null }
`craigslist_bargains` dataset from [here](https://worksheets.codalab.org/worksheets/0x453913e76b65495d8b9730d41c7e0a0c/)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1278/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1278/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1276
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1276/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1276/comments
https://api.github.com/repos/huggingface/datasets/issues/1276/events
https://github.com/huggingface/datasets/pull/1276
758,965,936
MDExOlB1bGxSZXF1ZXN0NTM0MDQyODYy
1,276
add One Million Posts Corpus
{ "login": "aseifert", "id": 4944799, "node_id": "MDQ6VXNlcjQ5NDQ3OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/4944799?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aseifert", "html_url": "https://github.com/aseifert", "followers_url": "https://api.github.com/users/aseifert/followers", "following_url": "https://api.github.com/users/aseifert/following{/other_user}", "gists_url": "https://api.github.com/users/aseifert/gists{/gist_id}", "starred_url": "https://api.github.com/users/aseifert/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aseifert/subscriptions", "organizations_url": "https://api.github.com/users/aseifert/orgs", "repos_url": "https://api.github.com/users/aseifert/repos", "events_url": "https://api.github.com/users/aseifert/events{/privacy}", "received_events_url": "https://api.github.com/users/aseifert/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "merging since the CI is fixed on master" ]
1,607,388,608,000
1,607,711,298,000
1,607,711,298,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1276", "html_url": "https://github.com/huggingface/datasets/pull/1276", "diff_url": "https://github.com/huggingface/datasets/pull/1276.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1276.patch", "merged_at": 1607711298000 }
- **Name:** One Million Posts Corpus
- **Description:** The “One Million Posts” corpus is an annotated data set consisting of user comments posted to an Austrian newspaper website (in German language).
- **Paper:** https://dl.acm.org/doi/10.1145/3077136.3080711
- **Data:** https://github.com/OFAI/million-post-corpus
- **Motivation:** Big German (real-life) dataset containing different annotations around forum moderation with expert annotations.

### Checkbox

- [X] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [X] Fill the `_DESCRIPTION` and `_CITATION` variables
- [X] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [X] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [X] Generate the metadata file `dataset_infos.json` for all configurations
- [X] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [X] Add the dataset card `README.md` using the template: fill the tags and the various paragraphs
- [X] Both tests for the real data and the dummy data pass.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1276/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1276/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1275
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1275/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1275/comments
https://api.github.com/repos/huggingface/datasets/issues/1275/events
https://github.com/huggingface/datasets/pull/1275
758,958,066
MDExOlB1bGxSZXF1ZXN0NTM0MDM2NjIw
1,275
Yoruba GV NER added
{ "login": "dadelani", "id": 23586676, "node_id": "MDQ6VXNlcjIzNTg2Njc2", "avatar_url": "https://avatars.githubusercontent.com/u/23586676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dadelani", "html_url": "https://github.com/dadelani", "followers_url": "https://api.github.com/users/dadelani/followers", "following_url": "https://api.github.com/users/dadelani/following{/other_user}", "gists_url": "https://api.github.com/users/dadelani/gists{/gist_id}", "starred_url": "https://api.github.com/users/dadelani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dadelani/subscriptions", "organizations_url": "https://api.github.com/users/dadelani/orgs", "repos_url": "https://api.github.com/users/dadelani/repos", "events_url": "https://api.github.com/users/dadelani/events{/privacy}", "received_events_url": "https://api.github.com/users/dadelani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thank you. Okay, I will add the dataset card." ]
1,607,387,498,000
1,607,469,928,000
1,607,469,928,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1275", "html_url": "https://github.com/huggingface/datasets/pull/1275", "diff_url": "https://github.com/huggingface/datasets/pull/1275.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1275.patch", "merged_at": null }
I just added the Yoruba GV NER dataset from this paper: https://www.aclweb.org/anthology/2020.lrec-1.335/
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1275/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1275/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1274
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1274/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1274/comments
https://api.github.com/repos/huggingface/datasets/issues/1274/events
https://github.com/huggingface/datasets/pull/1274
758,943,174
MDExOlB1bGxSZXF1ZXN0NTM0MDI0MTQx
1,274
oclar-dataset
{ "login": "alaameloh", "id": 26907161, "node_id": "MDQ6VXNlcjI2OTA3MTYx", "avatar_url": "https://avatars.githubusercontent.com/u/26907161?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alaameloh", "html_url": "https://github.com/alaameloh", "followers_url": "https://api.github.com/users/alaameloh/followers", "following_url": "https://api.github.com/users/alaameloh/following{/other_user}", "gists_url": "https://api.github.com/users/alaameloh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alaameloh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alaameloh/subscriptions", "organizations_url": "https://api.github.com/users/alaameloh/orgs", "repos_url": "https://api.github.com/users/alaameloh/repos", "events_url": "https://api.github.com/users/alaameloh/events{/privacy}", "received_events_url": "https://api.github.com/users/alaameloh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "merging since the CI is fixed on master" ]
1,607,385,405,000
1,607,528,168,000
1,607,528,168,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1274", "html_url": "https://github.com/huggingface/datasets/pull/1274", "diff_url": "https://github.com/huggingface/datasets/pull/1274.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1274.patch", "merged_at": 1607528168000 }
The Opinion Corpus for Lebanese Arabic Reviews (OCLAR) is suited to Arabic sentiment classification on reviews, covering hotels, restaurants, shops, and others: [homepage](http://archive.ics.uci.edu/ml/datasets/Opinion+Corpus+for+Lebanese+Arabic+Reviews+%28OCLAR%29#)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1274/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1274/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1273
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1273/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1273/comments
https://api.github.com/repos/huggingface/datasets/issues/1273/events
https://github.com/huggingface/datasets/pull/1273
758,935,768
MDExOlB1bGxSZXF1ZXN0NTM0MDE4MjQ2
1,273
Created wiki_movies dataset.
{ "login": "aclifton314", "id": 53267795, "node_id": "MDQ6VXNlcjUzMjY3Nzk1", "avatar_url": "https://avatars.githubusercontent.com/u/53267795?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aclifton314", "html_url": "https://github.com/aclifton314", "followers_url": "https://api.github.com/users/aclifton314/followers", "following_url": "https://api.github.com/users/aclifton314/following{/other_user}", "gists_url": "https://api.github.com/users/aclifton314/gists{/gist_id}", "starred_url": "https://api.github.com/users/aclifton314/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aclifton314/subscriptions", "organizations_url": "https://api.github.com/users/aclifton314/orgs", "repos_url": "https://api.github.com/users/aclifton314/repos", "events_url": "https://api.github.com/users/aclifton314/events{/privacy}", "received_events_url": "https://api.github.com/users/aclifton314/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "looks like your PR includes changes about many other files than the ones for wiki_movies\r\n\r\nCan you create another branch and another PR please ?", "I'm happy to. What's the best way to do that (sorry, I'm new to PRs etc.)?", "Sure !\r\n\r\nFirst please save your new dataset files somewhere.\r\nThen you can do in this order:\r\n```\r\ngit checkout master\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit push\r\ngit checkout -b my-new-branch-name\r\n```\r\nThis will create a new branch from the updated master branch.\r\nThen you can re-add your files and commit + push them\r\n\r\nOnce it's done you should be able to create a new PR using your new branch :) ", "Done!", "closing in favor of #1485 " ]
1,607,384,334,000
1,607,954,209,000
1,607,954,209,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1273", "html_url": "https://github.com/huggingface/datasets/pull/1273", "diff_url": "https://github.com/huggingface/datasets/pull/1273.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1273.patch", "merged_at": null }
First PR (ever). Hopefully this movies dataset is useful to others!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1273/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1273/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1272
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1272/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1272/comments
https://api.github.com/repos/huggingface/datasets/issues/1272/events
https://github.com/huggingface/datasets/pull/1272
758,924,960
MDExOlB1bGxSZXF1ZXN0NTM0MDA5MTk0
1,272
Psc
{ "login": "abecadel", "id": 1654113, "node_id": "MDQ6VXNlcjE2NTQxMTM=", "avatar_url": "https://avatars.githubusercontent.com/u/1654113?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abecadel", "html_url": "https://github.com/abecadel", "followers_url": "https://api.github.com/users/abecadel/followers", "following_url": "https://api.github.com/users/abecadel/following{/other_user}", "gists_url": "https://api.github.com/users/abecadel/gists{/gist_id}", "starred_url": "https://api.github.com/users/abecadel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abecadel/subscriptions", "organizations_url": "https://api.github.com/users/abecadel/orgs", "repos_url": "https://api.github.com/users/abecadel/repos", "events_url": "https://api.github.com/users/abecadel/events{/privacy}", "received_events_url": "https://api.github.com/users/abecadel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,383,176,000
1,607,384,885,000
1,607,384,868,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1272", "html_url": "https://github.com/huggingface/datasets/pull/1272", "diff_url": "https://github.com/huggingface/datasets/pull/1272.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1272.patch", "merged_at": null }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1272/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1272/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1271
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1271/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1271/comments
https://api.github.com/repos/huggingface/datasets/issues/1271/events
https://github.com/huggingface/datasets/pull/1271
758,924,203
MDExOlB1bGxSZXF1ZXN0NTM0MDA4NTg4
1,271
SMS Spam Dataset
{ "login": "czabo", "id": 75574105, "node_id": "MDQ6VXNlcjc1NTc0MTA1", "avatar_url": "https://avatars.githubusercontent.com/u/75574105?v=4", "gravatar_id": "", "url": "https://api.github.com/users/czabo", "html_url": "https://github.com/czabo", "followers_url": "https://api.github.com/users/czabo/followers", "following_url": "https://api.github.com/users/czabo/following{/other_user}", "gists_url": "https://api.github.com/users/czabo/gists{/gist_id}", "starred_url": "https://api.github.com/users/czabo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/czabo/subscriptions", "organizations_url": "https://api.github.com/users/czabo/orgs", "repos_url": "https://api.github.com/users/czabo/repos", "events_url": "https://api.github.com/users/czabo/events{/privacy}", "received_events_url": "https://api.github.com/users/czabo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,383,086,000
1,607,449,339,000
1,607,449,339,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1271", "html_url": "https://github.com/huggingface/datasets/pull/1271", "diff_url": "https://github.com/huggingface/datasets/pull/1271.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1271.patch", "merged_at": 1607449339000 }
Hi :) I added this [SMS Spam Dataset](http://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1271/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1271/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1270
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1270/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1270/comments
https://api.github.com/repos/huggingface/datasets/issues/1270/events
https://github.com/huggingface/datasets/pull/1270
758,917,216
MDExOlB1bGxSZXF1ZXN0NTM0MDAyODIz
1,270
add DFKI SmartData Corpus
{ "login": "aseifert", "id": 4944799, "node_id": "MDQ6VXNlcjQ5NDQ3OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/4944799?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aseifert", "html_url": "https://github.com/aseifert", "followers_url": "https://api.github.com/users/aseifert/followers", "following_url": "https://api.github.com/users/aseifert/following{/other_user}", "gists_url": "https://api.github.com/users/aseifert/gists{/gist_id}", "starred_url": "https://api.github.com/users/aseifert/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aseifert/subscriptions", "organizations_url": "https://api.github.com/users/aseifert/orgs", "repos_url": "https://api.github.com/users/aseifert/repos", "events_url": "https://api.github.com/users/aseifert/events{/privacy}", "received_events_url": "https://api.github.com/users/aseifert/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,382,228,000
1,607,449,283,000
1,607,449,283,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1270", "html_url": "https://github.com/huggingface/datasets/pull/1270", "diff_url": "https://github.com/huggingface/datasets/pull/1270.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1270.patch", "merged_at": 1607449283000 }
- **Name:** DFKI SmartData Corpus - **Description:** DFKI SmartData Corpus is a dataset of 2598 German-language documents annotated with fine-grained geo-entities, such as streets, stops and routes, as well as standard named entity types. - **Paper:** https://www.dfki.de/fileadmin/user_upload/import/9427_lrec_smartdata_corpus.pdf - **Data:** https://github.com/DFKI-NLP/smartdata-corpus - **Motivation:** Contains fine-grained NER labels for German. ### Checkbox - [X] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [X] Fill the `_DESCRIPTION` and `_CITATION` variables - [X] Implement `_infos()`, `_split_generators()` and `_generate_examples()` - [X] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class. - [X] Generate the metadata file `dataset_infos.json` for all configurations - [X] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and make sure they don't weigh too much (<50KB) - [X] Add the dataset card `README.md` using the template: fill the tags and the various paragraphs - [X] Both tests for the real data and the dummy data pass.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1270/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1269
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1269/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1269/comments
https://api.github.com/repos/huggingface/datasets/issues/1269/events
https://github.com/huggingface/datasets/pull/1269
758,886,174
MDExOlB1bGxSZXF1ZXN0NTMzOTc3MTE2
1,269
Adding OneStopEnglish corpus dataset
{ "login": "purvimisal", "id": 22298787, "node_id": "MDQ6VXNlcjIyMjk4Nzg3", "avatar_url": "https://avatars.githubusercontent.com/u/22298787?v=4", "gravatar_id": "", "url": "https://api.github.com/users/purvimisal", "html_url": "https://github.com/purvimisal", "followers_url": "https://api.github.com/users/purvimisal/followers", "following_url": "https://api.github.com/users/purvimisal/following{/other_user}", "gists_url": "https://api.github.com/users/purvimisal/gists{/gist_id}", "starred_url": "https://api.github.com/users/purvimisal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/purvimisal/subscriptions", "organizations_url": "https://api.github.com/users/purvimisal/orgs", "repos_url": "https://api.github.com/users/purvimisal/repos", "events_url": "https://api.github.com/users/purvimisal/events{/privacy}", "received_events_url": "https://api.github.com/users/purvimisal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq, thanks for the review.\r\nI have made all the changes, PTAL! :) " ]
1,607,378,711,000
1,607,539,418,000
1,607,528,033,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1269", "html_url": "https://github.com/huggingface/datasets/pull/1269", "diff_url": "https://github.com/huggingface/datasets/pull/1269.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1269.patch", "merged_at": 1607528033000 }
This PR adds the OneStopEnglish Corpus, containing texts classified into reading levels (elementary, intermediate, advanced) for automatic readability assessment and text simplification. Link to the paper: https://www.aclweb.org/anthology/W18-0535.pdf
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1269/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1269/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1268
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1268/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1268/comments
https://api.github.com/repos/huggingface/datasets/issues/1268/events
https://github.com/huggingface/datasets/pull/1268
758,871,252
MDExOlB1bGxSZXF1ZXN0NTMzOTY0OTQ4
1,268
new pr for Turkish NER
{ "login": "merveenoyan", "id": 53175384, "node_id": "MDQ6VXNlcjUzMTc1Mzg0", "avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/merveenoyan", "html_url": "https://github.com/merveenoyan", "followers_url": "https://api.github.com/users/merveenoyan/followers", "following_url": "https://api.github.com/users/merveenoyan/following{/other_user}", "gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions", "organizations_url": "https://api.github.com/users/merveenoyan/orgs", "repos_url": "https://api.github.com/users/merveenoyan/repos", "events_url": "https://api.github.com/users/merveenoyan/events{/privacy}", "received_events_url": "https://api.github.com/users/merveenoyan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Can you run `make style` to fix the code format ?\r\n\r\nAlso it looks like the file `file_downloaded/TWNERTC_TC_Coarse Grained NER_DomainIndependent_NoiseReduction.zip/TWNERTC_TC_Coarse Grained NER_DomainIndependent_NoiseReduction.DUMP` is missing inside the dummy_data.zip\r\n\r\n\r\n(note that `TWNERTC_TC_Coarse Grained NER_DomainIndependent_NoiseReduction.zip` is a directory name, not an actual zip file)", "Hi Quentin, thank you for your patience with me. I've fixed the preprocessing pipeline, got this very weird error that Yacine told me to push. I've pushed it and after I'll find out that it will work, I will have my final pr on styling.", "looks like you removed the dataset script file in your latest commit, is it expected ?" ]
1,607,377,226,000
1,607,521,505,000
1,607,521,505,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1268", "html_url": "https://github.com/huggingface/datasets/pull/1268", "diff_url": "https://github.com/huggingface/datasets/pull/1268.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1268.patch", "merged_at": 1607521505000 }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1268/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1268/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1267
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1267/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1267/comments
https://api.github.com/repos/huggingface/datasets/issues/1267/events
https://github.com/huggingface/datasets/pull/1267
758,826,568
MDExOlB1bGxSZXF1ZXN0NTMzOTMwNzU2
1,267
Has part
{ "login": "jeromeku", "id": 2455711, "node_id": "MDQ6VXNlcjI0NTU3MTE=", "avatar_url": "https://avatars.githubusercontent.com/u/2455711?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jeromeku", "html_url": "https://github.com/jeromeku", "followers_url": "https://api.github.com/users/jeromeku/followers", "following_url": "https://api.github.com/users/jeromeku/following{/other_user}", "gists_url": "https://api.github.com/users/jeromeku/gists{/gist_id}", "starred_url": "https://api.github.com/users/jeromeku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jeromeku/subscriptions", "organizations_url": "https://api.github.com/users/jeromeku/orgs", "repos_url": "https://api.github.com/users/jeromeku/repos", "events_url": "https://api.github.com/users/jeromeku/events{/privacy}", "received_events_url": "https://api.github.com/users/jeromeku/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "merging since the CI is fixed on master" ]
1,607,373,123,000
1,607,711,142,000
1,607,711,142,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1267", "html_url": "https://github.com/huggingface/datasets/pull/1267", "diff_url": "https://github.com/huggingface/datasets/pull/1267.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1267.patch", "merged_at": 1607711142000 }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1267/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1266
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1266/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1266/comments
https://api.github.com/repos/huggingface/datasets/issues/1266/events
https://github.com/huggingface/datasets/pull/1266
758,704,178
MDExOlB1bGxSZXF1ZXN0NTMzODMyNTQ1
1,266
removing unzipped hansards dummy data
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,362,276,000
1,607,362,349,000
1,607,362,349,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1266", "html_url": "https://github.com/huggingface/datasets/pull/1266", "diff_url": "https://github.com/huggingface/datasets/pull/1266.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1266.patch", "merged_at": 1607362348000 }
which were added by mistake
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1266/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1266/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1265
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1265/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1265/comments
https://api.github.com/repos/huggingface/datasets/issues/1265/events
https://github.com/huggingface/datasets/pull/1265
758,687,223
MDExOlB1bGxSZXF1ZXN0NTMzODE4NjY0
1,265
Add CovidQA dataset
{ "login": "olinguyen", "id": 4341867, "node_id": "MDQ6VXNlcjQzNDE4Njc=", "avatar_url": "https://avatars.githubusercontent.com/u/4341867?v=4", "gravatar_id": "", "url": "https://api.github.com/users/olinguyen", "html_url": "https://github.com/olinguyen", "followers_url": "https://api.github.com/users/olinguyen/followers", "following_url": "https://api.github.com/users/olinguyen/following{/other_user}", "gists_url": "https://api.github.com/users/olinguyen/gists{/gist_id}", "starred_url": "https://api.github.com/users/olinguyen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/olinguyen/subscriptions", "organizations_url": "https://api.github.com/users/olinguyen/orgs", "repos_url": "https://api.github.com/users/olinguyen/repos", "events_url": "https://api.github.com/users/olinguyen/events{/privacy}", "received_events_url": "https://api.github.com/users/olinguyen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "It seems to share the same name as this dataset: https://openreview.net/forum?id=JENSKEEzsoU", "> It seems to share the same name as this dataset: https://openreview.net/forum?id=JENSKEEzsoU\r\n\r\nyou're right it can be confusing. I'll add the organization/research group for clarity: `covid_qa_castorini`. I added the dataset you shared as `covid_qa_deepset` in another PR (#1182) ", "Thanks for avoiding the name collision !" ]
1,607,360,811,000
1,607,446,946,000
1,607,446,946,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1265", "html_url": "https://github.com/huggingface/datasets/pull/1265", "diff_url": "https://github.com/huggingface/datasets/pull/1265.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1265.patch", "merged_at": 1607446946000 }
This PR adds CovidQA, a question answering dataset specifically designed for COVID-19, built by hand from knowledge gathered from Kaggle’s COVID-19 Open Research Dataset Challenge. Link to the paper: https://arxiv.org/pdf/2004.11339.pdf Link to the homepage: https://covidqa.ai
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1265/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1265/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1264
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1264/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1264/comments
https://api.github.com/repos/huggingface/datasets/issues/1264/events
https://github.com/huggingface/datasets/pull/1264
758,686,474
MDExOlB1bGxSZXF1ZXN0NTMzODE4MDM2
1,264
enriched webnlg dataset rebase
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I've removed the `en` within `de` and reciprocally; but I don't think I will be able to thin it more than this. (Edit: ignore the close, I missclicked !)" ]
1,607,360,745,000
1,607,533,229,000
1,607,533,227,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1264", "html_url": "https://github.com/huggingface/datasets/pull/1264", "diff_url": "https://github.com/huggingface/datasets/pull/1264.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1264.patch", "merged_at": 1607533227000 }
Rebase of #1206!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1264/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1264/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1263
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1263/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1263/comments
https://api.github.com/repos/huggingface/datasets/issues/1263/events
https://github.com/huggingface/datasets/pull/1263
758,663,787
MDExOlB1bGxSZXF1ZXN0NTMzNzk5NzU5
1,263
Added kannada news headlines classification dataset.
{ "login": "vrindaprabhu", "id": 16264631, "node_id": "MDQ6VXNlcjE2MjY0NjMx", "avatar_url": "https://avatars.githubusercontent.com/u/16264631?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vrindaprabhu", "html_url": "https://github.com/vrindaprabhu", "followers_url": "https://api.github.com/users/vrindaprabhu/followers", "following_url": "https://api.github.com/users/vrindaprabhu/following{/other_user}", "gists_url": "https://api.github.com/users/vrindaprabhu/gists{/gist_id}", "starred_url": "https://api.github.com/users/vrindaprabhu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vrindaprabhu/subscriptions", "organizations_url": "https://api.github.com/users/vrindaprabhu/orgs", "repos_url": "https://api.github.com/users/vrindaprabhu/repos", "events_url": "https://api.github.com/users/vrindaprabhu/events{/privacy}", "received_events_url": "https://api.github.com/users/vrindaprabhu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! Let me know if any more comments! Will fix it! :-)" ]
1,607,358,937,000
1,607,610,655,000
1,607,536,891,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1263", "html_url": "https://github.com/huggingface/datasets/pull/1263", "diff_url": "https://github.com/huggingface/datasets/pull/1263.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1263.patch", "merged_at": 1607536891000 }
Manual download of a Kaggle dataset. Mostly followed the same process as ms_terms.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1263/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1263/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1262
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1262/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1262/comments
https://api.github.com/repos/huggingface/datasets/issues/1262/events
https://github.com/huggingface/datasets/pull/1262
758,637,124
MDExOlB1bGxSZXF1ZXN0NTMzNzc3OTcy
1,262
Adding msr_genomics_kbcomp dataset
{ "login": "manandey", "id": 6687858, "node_id": "MDQ6VXNlcjY2ODc4NTg=", "avatar_url": "https://avatars.githubusercontent.com/u/6687858?v=4", "gravatar_id": "", "url": "https://api.github.com/users/manandey", "html_url": "https://github.com/manandey", "followers_url": "https://api.github.com/users/manandey/followers", "following_url": "https://api.github.com/users/manandey/following{/other_user}", "gists_url": "https://api.github.com/users/manandey/gists{/gist_id}", "starred_url": "https://api.github.com/users/manandey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manandey/subscriptions", "organizations_url": "https://api.github.com/users/manandey/orgs", "repos_url": "https://api.github.com/users/manandey/repos", "events_url": "https://api.github.com/users/manandey/events{/privacy}", "received_events_url": "https://api.github.com/users/manandey/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,356,890,000
1,607,450,935,000
1,607,450,927,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1262", "html_url": "https://github.com/huggingface/datasets/pull/1262", "diff_url": "https://github.com/huggingface/datasets/pull/1262.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1262.patch", "merged_at": null }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1262/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1262/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1261
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1261/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1261/comments
https://api.github.com/repos/huggingface/datasets/issues/1261/events
https://github.com/huggingface/datasets/pull/1261
758,626,112
MDExOlB1bGxSZXF1ZXN0NTMzNzY4OTgy
1,261
Add Google Sentence Compression dataset
{ "login": "mattbui", "id": 46804938, "node_id": "MDQ6VXNlcjQ2ODA0OTM4", "avatar_url": "https://avatars.githubusercontent.com/u/46804938?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mattbui", "html_url": "https://github.com/mattbui", "followers_url": "https://api.github.com/users/mattbui/followers", "following_url": "https://api.github.com/users/mattbui/following{/other_user}", "gists_url": "https://api.github.com/users/mattbui/gists{/gist_id}", "starred_url": "https://api.github.com/users/mattbui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mattbui/subscriptions", "organizations_url": "https://api.github.com/users/mattbui/orgs", "repos_url": "https://api.github.com/users/mattbui/repos", "events_url": "https://api.github.com/users/mattbui/events{/privacy}", "received_events_url": "https://api.github.com/users/mattbui/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,356,063,000
1,607,446,919,000
1,607,446,919,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1261", "html_url": "https://github.com/huggingface/datasets/pull/1261", "diff_url": "https://github.com/huggingface/datasets/pull/1261.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1261.patch", "merged_at": 1607446919000 }
For more information: https://www.aclweb.org/anthology/D13-1155.pdf
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1261/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1261/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1260
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1260/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1260/comments
https://api.github.com/repos/huggingface/datasets/issues/1260/events
https://github.com/huggingface/datasets/pull/1260
758,601,828
MDExOlB1bGxSZXF1ZXN0NTMzNzQ4ODM3
1,260
Added NewsPH Raw Dataset
{ "login": "jcblaisecruz02", "id": 24757547, "node_id": "MDQ6VXNlcjI0NzU3NTQ3", "avatar_url": "https://avatars.githubusercontent.com/u/24757547?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jcblaisecruz02", "html_url": "https://github.com/jcblaisecruz02", "followers_url": "https://api.github.com/users/jcblaisecruz02/followers", "following_url": "https://api.github.com/users/jcblaisecruz02/following{/other_user}", "gists_url": "https://api.github.com/users/jcblaisecruz02/gists{/gist_id}", "starred_url": "https://api.github.com/users/jcblaisecruz02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jcblaisecruz02/subscriptions", "organizations_url": "https://api.github.com/users/jcblaisecruz02/orgs", "repos_url": "https://api.github.com/users/jcblaisecruz02/repos", "events_url": "https://api.github.com/users/jcblaisecruz02/events{/privacy}", "received_events_url": "https://api.github.com/users/jcblaisecruz02/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "looks like this PR has changes to many files other than the ones for `NewsPH`\r\n\r\nCan you create another branch and another PR please ?" ]
1,607,354,273,000
1,607,444,835,000
1,607,444,835,000
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1260", "html_url": "https://github.com/huggingface/datasets/pull/1260", "diff_url": "https://github.com/huggingface/datasets/pull/1260.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1260.patch", "merged_at": null }
Added the raw version of the NewsPH dataset, which was used to automatically generate the NewsPH-NLI corpus. It is a dataset of news articles in Filipino from mainstream Philippine news sites, and can be used for language modeling or to reproduce the NewsPH-NLI dataset. Paper: https://arxiv.org/abs/2010.11574 Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1260/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1260/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/1259
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1259/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1259/comments
https://api.github.com/repos/huggingface/datasets/issues/1259/events
https://github.com/huggingface/datasets/pull/1259
758,565,320
MDExOlB1bGxSZXF1ZXN0NTMzNzE4NjMz
1,259
Add KorQPair dataset
{ "login": "jaketae", "id": 25360440, "node_id": "MDQ6VXNlcjI1MzYwNDQw", "avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jaketae", "html_url": "https://github.com/jaketae", "followers_url": "https://api.github.com/users/jaketae/followers", "following_url": "https://api.github.com/users/jaketae/following{/other_user}", "gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}", "starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jaketae/subscriptions", "organizations_url": "https://api.github.com/users/jaketae/orgs", "repos_url": "https://api.github.com/users/jaketae/repos", "events_url": "https://api.github.com/users/jaketae/events{/privacy}", "received_events_url": "https://api.github.com/users/jaketae/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "dummy data is missing", "Hey @cceyda, thanks for pointing that out. I thought I'd added it, but seems like that wasn't the case. Just pushed a new commit with the dummy data." ]
1,607,351,637,000
1,640,738,980,000
1,607,440,301,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1259", "html_url": "https://github.com/huggingface/datasets/pull/1259", "diff_url": "https://github.com/huggingface/datasets/pull/1259.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1259.patch", "merged_at": 1607440301000 }
This PR adds a [Korean paired question dataset](https://github.com/songys/Question_pair) containing labels indicating whether the two questions in a given pair are semantically identical. This dataset was used to evaluate the performance of [KoGPT2](https://github.com/SKT-AI/KoGPT2#subtask-evaluations) on a paraphrase detection downstream task.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1259/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1259/timeline
null
true